Add HPSS-specific logic to SwiftOnFile, and kick this fork off!
parent 55857a2e21
commit 4a578f8c93

.gitreview (14 lines changed)
@@ -1,6 +1,8 @@
-[gerrit]
-host=review.openstack.org
-port=29418
-project=openstack/swiftonfile.git
-defaultbranch=master
-defaultremote=gerrit
+# TODO: get ourselves a nice and shiny CI system like this
+
+#[gerrit]
+#host=review.openstack.org
+#port=29418
+#project=openstack/swiftonfile.git
+#defaultbranch=master
+#defaultremote=gerrit
@@ -26,7 +26,7 @@ else
    cover_branches="--cover-branches --cover-html --cover-html-dir=$TOP_DIR/cover"
 fi
 cd $TOP_DIR/test/unit
-nosetests -v --exe --with-coverage --cover-package swiftonfile --cover-erase $cover_branches $@
+nosetests -v --exe --with-coverage --cover-package swiftonhpss --cover-erase $cover_branches $@
 rvalue=$?
 rm -f .coverage
 cd -
@@ -1,6 +1,6 @@
 include README.md
 include .functests .unittests tox.ini requirements.txt test-requirements.txt
-include makerpm.sh pkgconfig.py swiftonfile.spec
+include makerpm.sh pkgconfig.py swiftonhpss.spec
 graft doc
 graft etc
 graft test
README.md (55 lines changed)
@@ -1,30 +1,31 @@
 [![Build Status](https://travis-ci.org/swiftonfile/swiftonfile.svg?branch=master)](https://travis-ci.org/swiftonfile/swiftonfile)
 
-# Swift-on-File
-Swift-on-File is a Swift Object Server implementation that enables users to
-access the same data, both as an object and as a file. Data can be stored and
-retrieved through Swift's REST interface or as files from NAS interfaces
-including native GlusterFS, GPFS, NFS and CIFS.
+# Swift-on-HPSS
+Swift-on-HPSS is a fork of the Swift-on-File Swift Object Server implementation
+that enables users to access the same data, both as an object and as a file.
+Data can be stored and retrieved through Swift's REST interface or as files from
+your site's HPSS archive system.
 
-Swift-on-File is to be deployed as a Swift [storage policy](http://docs.openstack.org/developer/swift/overview_policies.html),
+Swift-on-HPSS is to be deployed as a Swift [storage policy](http://docs.openstack.org/developer/swift/overview_policies.html),
 which provides the advantages of being able to extend an existing Swift cluster
 and also migrating data to and from policies with different storage backends.
 
-The main difference from the default Swift Object Server is that Swift-on-File
+The main difference from the default Swift Object Server is that Swift-on-HPSS
 stores objects following the same path hierarchy as the object's URL. In contrast,
 the default Swift implementation stores the object following the mapping given
-by the Ring, and its final file path is unkown to the user.
+by the Ring, and its final file path is unknown to the user.
 
 For example, an object with URL: `https://swift.example.com/v1/acct/cont/obj`,
 would be stored the following way by the two systems:
 * Swift: `/mnt/sdb1/2/node/sdb2/objects/981/f79/f566bd022b9285b05e665fd7b843bf79/1401254393.89313.data`
-* SoF: `/mnt/swiftonfile/acct/cont/obj`
+* SwiftOnHPSS: `/mnt/swiftonhpss/acct/cont/obj`
 
 ## Use cases
-Swift-on-File can be especially useful in cases where access over multiple
-protocols is desired. For example, imagine a deployment where video files
-are uploaded as objects over Swift's REST interface and a legacy video transcoding
-software access those videos as files.
+Swift-on-HPSS can be especially useful in cases where access over multiple
+protocols is desired. For example, imagine a deployment where collected weather
+datasets are uploaded as objects over Swift's REST interface and existing weather
+modelling software can pull this archived weather data using any number of interfaces
+HPSS already supports.
 
 Along the same lines, data can be ingested over Swift's REST interface and then
 analytic software like Hadoop can operate directly on the data without having to
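The URL-to-path mapping shown above is simple enough to sketch. A minimal illustration in Python 3 (the codebase itself is Python 2; `sof_object_path` is a hypothetical helper written for this example, not a function in the repository, and the mount point is taken from the README's example):

```python
import os
from urllib.parse import urlsplit

def sof_object_path(url, mount="/mnt/swiftonhpss"):
    """Map a Swift object URL onto the SwiftOnHPSS-style on-disk path."""
    # A Swift object URL path looks like /v1/<account>/<container>/<object>;
    # SwiftOnHPSS keeps the same hierarchy under the mount point, so the
    # file location is recoverable from the URL alone (unlike the default
    # ring-hashed layout).
    _version, account, container, obj = \
        urlsplit(url).path.lstrip("/").split("/", 3)
    return os.path.join(mount, account, container, obj)

print(sof_object_path("https://swift.example.com/v1/acct/cont/obj"))
# -> /mnt/swiftonhpss/acct/cont/obj
```

Note that the object name may itself contain slashes; they survive as a nested directory path, which is exactly the behaviour that makes the data reachable over file interfaces.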
@@ -37,15 +38,7 @@ Similarly, scientific applications may process file data and then select some or
 of the data to publish to outside users through the swift interface.
 
 ## Limitations and Future plans
-Swift-On-File currently works only with Filesystems with extended attributes
-support. It is also recommended that these Filesystems provide data durability
-as Swift-On-File should not use Swift's replication mechanisms.
-
-GlusterFS and GPFS are good examples of Filesystems that work well with Swift-on-File.
-Both provide a posix interface, global namespace, scalability, data replication
-and support for extended attributes.
-
-Currently, files added over a file interface (e.g., native GlusterFS), do not show
+Currently, files added over a file interface (e.g., PFTP or FUSE), do not show
 up in container listings, still those files would be accessible over Swift's REST
 interface with a GET request. We are working to provide a solution to this limitation.
 
@@ -53,16 +46,17 @@ There is also subtle but very important difference in the implementation of
 [last write wins](doc/markdown/last_write_wins.md) behaviour when compared to
 OpenStack Swift.
 
-Because Swift-On-File relies on the data replication support of the filesystem the Swift
-Object replicator process does not have any role for containers using the Swift-on-File
-storage policy. This means that Swift geo replication is not available to objects in
-in containers using the Swift-on-File storage policy. Multi-site replication for these
-objects must be provided by the filesystem.
-Future plans includes adding support for Filesystems without extended attributes,
-which should extend the ability to migrate data for legacy storage systems.
+Because Swift-On-HPSS relies on the data replication support of HPSS
+(dual/quad copy, RAIT, etc) the Swift Object replicator process does not have
+any role for containers using the Swift-on-HPSS storage policy.
+This means that Swift geo replication is not available to objects
+in containers using the Swift-on-HPSS storage policy.
+Multi-site replication for these objects must be provided by your HPSS
+configuration.
 
 ## Get involved:
 
+(TODO: write specifics for Swift-on-HPSS)
 To learn more about Swift-On-File, you can watch the presentation given at
 the Paris OpenStack Summit: [Deploying Swift on a File System](http://youtu.be/vPn2uZF4yWo).
 The Paris presentation slides can be found [here](https://github.com/thiagol11/openstack-fall-summit-2014)
@@ -75,5 +69,6 @@ or work directly on the code. You can file bugs or blueprints on [launchpad](htt
 or find us in the #swiftonfile channel on Freenode.
 
 # Guides to get started:
+(TODO: modify these guides with Swift-on-HPSS specifics)
 1. [Quick Start Guide with XFS/GlusterFS](doc/markdown/quick_start_guide.md)
 2. [Developer Guide](doc/markdown/dev_guide.md)
@@ -25,7 +25,7 @@ import cPickle as pickle
 import multiprocessing
 
 from optparse import OptionParser
-from swiftonfile.swift.common.utils import write_metadata, SafeUnpickler, \
+from swiftonhpss.swift.common.utils import write_metadata, SafeUnpickler, \
     METADATA_KEY, MAX_XATTR_SIZE
 
 
@@ -19,7 +19,7 @@ import pprint
 import os
 import json
 from optparse import OptionParser
-from swiftonfile.swift.common.utils import read_metadata
+from swiftonhpss.swift.common.utils import read_metadata
 
 # Parser Setup
 USAGE = "Usage: %prog [options] OBJECT"
@@ -1,5 +1,8 @@
 # Developer Guide
 
+This guide is for SwiftOnFile development. IBM does not actually have a Jenkins
+or Gerrit set up yet for SwiftOnHPSS.
+
 ## Development Environment Setup
 The workflow for SwiftOnFile is largely based upon the [OpenStack Gerrit Workflow][].
 
@@ -1,5 +1,7 @@
 # Quick Start Guide
 
+TODO: Update this for SwiftOnHPSS specifics!
+
 ## Contents
 * [Overview](#overview)
 * [System Setup](#system_setup)
@@ -32,7 +32,7 @@ workers = 1
 pipeline = object-server
 
 [app:object-server]
-use = egg:swiftonfile#object
+use = egg:swiftonhpss#object
 user = <your-user-name>
 log_facility = LOG_LOCAL2
 log_level = WARN
@@ -79,10 +79,10 @@ default = yes
 #ec_num_parity_fragments = 4
 #ec_object_segment_size = 1048576
 
-# The following section defines a policy called 'swiftonfile' to be used by
-# swiftonfile object-server implementation.
+# The following section defines a policy called 'swiftonhpss' to be used by
+# swiftonhpss object-server implementation.
 [storage-policy:3]
-name = swiftonfile
+name = swiftonhpss
 policy_type = replication
 
 # The swift-constraints section sets the basic constraints on data
@@ -3,7 +3,7 @@
 # Simple script to create RPMs for G4S
 
 ## RPM NAME
-RPMNAME=swiftonfile
+RPMNAME=swiftonhpss
 
 cleanup()
 {
@@ -1,7 +1,7 @@
 # Simple program to save all package information
 # into a file which can be sourced by a bash script
 
-from swiftonfile.swift import _pkginfo as pkginfo
+from swiftonhpss.swift import _pkginfo as pkginfo
 
 PKGCONFIG = 'pkgconfig.in'
 
@@ -10,3 +10,8 @@ pastedeploy>=1.3.3
 simplejson>=2.0.9
 xattr>=0.4
 PyECLib==1.0.7
+
+# HPSS-specific package requirements. Get these from your HPSS support
+# representative.
+hpss
+hpssfs
setup.py (19 lines changed)
@@ -14,21 +14,20 @@
 # limitations under the License.
 
 from setuptools import setup, find_packages
-from swiftonfile.swift import _pkginfo
+from swiftonhpss.swift import _pkginfo
 
 
 setup(
     name=_pkginfo.name,
     version=_pkginfo.full_version,
-    description='SwiftOnFile',
+    description='SwiftOnHPSS',
     license='Apache License (2.0)',
-    author='Red Hat, Inc.',
-    author_email='gluster-users@gluster.org',
-    url='https://github.com/openstack/swiftonfile',
+    author='IBM & Red Hat, Inc.',
+    url='https://github.com/hpss-collaboration/swiftonhpss',
     packages=find_packages(exclude=['test', 'bin']),
     test_suite='nose.collector',
     classifiers=[
-        'Development Status :: 5 - Production/Stable'
+        'Development Status :: 2 - Pre-Alpha'
         'Environment :: OpenStack'
         'Intended Audience :: Information Technology'
         'Intended Audience :: System Administrators'
@@ -41,15 +40,15 @@ setup(
     ],
     install_requires=[],
     scripts=[
-        'bin/swiftonfile-print-metadata',
-        'bin/swiftonfile-migrate-metadata',
+        'bin/swiftonhpss-print-metadata',
+        'bin/swiftonhpss-migrate-metadata',
     ],
     entry_points={
         'paste.app_factory': [
-            'object=swiftonfile.swift.obj.server:app_factory',
+            'object=swiftonhpss.swift.obj.server:app_factory',
         ],
         'paste.filter_factory': [
-            'sof_constraints=swiftonfile.swift.common.middleware.'
+            'sof_constraints=swiftonhpss.swift.common.middleware.'
             'check_constraints:filter_factory',
         ],
     },
@@ -1,112 +0,0 @@
-# Copyright (c) 2012-2014 Red Hat, Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#    http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-# implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-""" Object Server for Gluster for Swift """
-
-from swift.common.swob import HTTPConflict, HTTPNotImplemented
-from swift.common.utils import public, timing_stats, replication, \
-    config_true_value
-from swift.common.request_helpers import get_name_and_placement
-from swiftonfile.swift.common.exceptions import AlreadyExistsAsFile, \
-    AlreadyExistsAsDir
-from swift.common.request_helpers import split_and_validate_path
-
-from swift.obj import server
-
-from swiftonfile.swift.obj.diskfile import DiskFileManager
-from swiftonfile.swift.common.constraints import check_object_creation
-from swiftonfile.swift.common import utils
-
-
-class SwiftOnFileDiskFileRouter(object):
-    """
-    Replacement for Swift's DiskFileRouter object.
-    Always returns SwiftOnFile's DiskFileManager implementation.
-    """
-    def __init__(self, *args, **kwargs):
-        self.manager_cls = DiskFileManager(*args, **kwargs)
-
-    def __getitem__(self, policy):
-        return self.manager_cls
-
-
-class ObjectController(server.ObjectController):
-    """
-    Subclass of the object server's ObjectController which replaces the
-    container_update method with one that is a no-op (information is simply
-    stored on disk and already updated by virtue of performing the file system
-    operations directly).
-    """
-    def setup(self, conf):
-        """
-        Implementation specific setup. This method is called at the very end
-        by the constructor to allow a specific implementation to modify
-        existing attributes or add its own attributes.
-
-        :param conf: WSGI configuration parameter
-        """
-        # Replaces Swift's DiskFileRouter object reference with ours.
-        self._diskfile_router = SwiftOnFileDiskFileRouter(conf, self.logger)
-        # This conf option will be deprecated and eventualy removed in
-        # future releases
-        utils.read_pickled_metadata = \
-            config_true_value(conf.get('read_pickled_metadata', 'no'))
-
-    @public
-    @timing_stats()
-    def PUT(self, request):
-        try:
-            device, partition, account, container, obj, policy = \
-                get_name_and_placement(request, 5, 5, True)
-
-            # check swiftonfile constraints first
-            error_response = check_object_creation(request, obj)
-            if error_response:
-                return error_response
-
-            # now call swift's PUT method
-            return server.ObjectController.PUT(self, request)
-        except (AlreadyExistsAsFile, AlreadyExistsAsDir):
-            device = \
-                split_and_validate_path(request, 1, 5, True)
-            return HTTPConflict(drive=device, request=request)
-
-    @public
-    @replication
-    @timing_stats(sample_rate=0.1)
-    def REPLICATE(self, request):
-        """
-        In Swift, this method handles REPLICATE requests for the Swift
-        Object Server. This is used by the object replicator to get hashes
-        for directories.
-
-        Swiftonfile does not support this as it expects the underlying
-        filesystem to take care of replication. Also, swiftonfile has no
-        notion of hashes for directories.
-        """
-        return HTTPNotImplemented(request=request)
-
-    @public
-    @replication
-    @timing_stats(sample_rate=0.1)
-    def REPLICATION(self, request):
-        return HTTPNotImplemented(request=request)
-
-
-def app_factory(global_conf, **local_conf):
-    """paste.deploy app factory for creating WSGI object server apps"""
-    conf = global_conf.copy()
-    conf.update(local_conf)
-    return ObjectController(conf)
@@ -9,7 +9,7 @@ Name : %{_name}
 Version : %{_version}
 Release : %{_release}%{?dist}
 Group : Applications/System
-URL : https://github.com/openstack/swiftonfile
+URL : https://github.com/hpss-collaboration/swiftonhpss
 Source0 : %{_name}-%{_version}-%{_release}.tar.gz
 License : ASL 2.0
 BuildArch: noarch
@@ -20,10 +20,10 @@ Requires : python-setuptools
 Requires : openstack-swift-object = 2.3.0
 
 %description
-SwiftOnFile is a Swift Object Server implementation that enables users to
+SwiftOnHPSS is a Swift Object Server implementation that enables users to
 access the same data, both as an object and as a file. Data can be stored
-and retrieved through Swift's REST interface or as files from NAS interfaces
-including native GlusterFS, GPFS, NFS and CIFS.
+and retrieved through Swift's REST interface or as files from your site's HPSS
+archive system.
 
 %prep
 %setup -q -n swiftonfile-%{_version}
@@ -58,6 +58,9 @@ cp -r etc/* %{buildroot}/%{_confdir}/
 rm -rf %{buildroot}
 
 %changelog
+* Thu Dec 10 2015 Phil Bridges <pgbridge@us.ibm.com>
+- Fork SwiftOnFile into SwiftOnHPSS, add HPSS-specific features
+
 * Wed Jul 15 2015 Prashanth Pai <ppai@redhat.com> - 2.3.0-0
 - Update spec file to support Kilo release of Swift
 
@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-""" SwiftOnFile """
+""" SwiftOnHPSS """
 
 
 class PkgInfo(object):
@@ -43,6 +43,9 @@ class PkgInfo(object):
 
 
 # Change the Package version here
-_pkginfo = PkgInfo('2.3.0', '0', 'swiftonfile', False)
+_pkginfo = PkgInfo(canonical_version='2.3.0',
+                   release='0',
+                   name='swiftonhpss',
+                   final=False)
 __version__ = _pkginfo.pretty_version
 __canonical_version__ = _pkginfo.canonical_version
@@ -25,7 +25,7 @@ from itertools import repeat
 import ctypes
 from eventlet import sleep
 from swift.common.utils import load_libc_function
-from swiftonfile.swift.common.exceptions import SwiftOnFileSystemOSError
+from swiftonhpss.swift.common.exceptions import SwiftOnFileSystemOSError
 from swift.common.exceptions import DiskFileNoSpace
 
 
@@ -16,10 +16,10 @@
 """
 The ``sof_constraints`` middleware should be added to the pipeline in your
 ``/etc/swift/proxy-server.conf`` file, and a mapping of storage policies
-using the swiftonfile object server should be listed in the 'policies'
+using the swiftonhpss object server should be listed in the 'policies'
 variable in the filter section.
 
-The swiftonfile constraints contains additional checks to make sure object
+The swiftonhpss constraints contains additional checks to make sure object
 names conform with POSIX filesystems file and directory naming limitations
 
 For example::
@@ -28,8 +28,8 @@ For example::
     pipeline = catch_errors sof_constraints cache proxy-server
 
     [filter:sof_constraints]
-    use = egg:swiftonfile#sof_constraints
-    policies=swiftonfile,gold
+    use = egg:swiftonhpss#sof_constraints
+    policies=swiftonhpss,gold
 """
 
 from urllib import unquote
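The `egg:swiftonhpss#sof_constraints` line resolves, through the `paste.filter_factory` entry point declared in setup.py, to a `filter_factory` callable. A generic sketch of that paste.deploy pattern (this is not the actual `check_constraints` code; the middleware body here simply forwards requests, and the `policies` attribute is exposed only for illustration):

```python
def filter_factory(global_conf, **local_conf):
    """paste.deploy filter factory: returns a callable that wraps a WSGI app."""
    conf = dict(global_conf)
    conf.update(local_conf)
    # 'policies' arrives as the raw comma-separated string from the
    # [filter:sof_constraints] section of proxy-server.conf.
    policies = [p.strip() for p in conf.get('policies', '').split(',')
                if p.strip()]

    def constraints_filter(app):
        def middleware(environ, start_response):
            # The real middleware would validate object names against POSIX
            # naming limits for the listed policies; this sketch just forwards.
            return app(environ, start_response)
        middleware.policies = policies  # exposed for illustration only
        return middleware

    return constraints_filter

wrapped = filter_factory({}, policies='swiftonhpss,gold')(lambda e, s: [b'ok'])
print(wrapped.policies)  # -> ['swiftonhpss', 'gold']
```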
@@ -37,8 +37,8 @@ from swift.common.utils import get_logger
 from swift.common.swob import Request, HTTPBadRequest
 from swift.proxy.controllers.base import get_container_info
 from swift.common.storage_policy import POLICIES
-from swiftonfile.swift.common import constraints
-from swiftonfile.swift.common.constraints import check_object_creation \
+from swiftonhpss.swift.common import constraints
+from swiftonhpss.swift.common.constraints import check_object_creation \
     as sof_check_object_creation
 
 
@@ -24,10 +24,10 @@ from eventlet import sleep
 import cPickle as pickle
 from cStringIO import StringIO
 import pickletools
-from swiftonfile.swift.common.exceptions import SwiftOnFileSystemIOError
+from swiftonhpss.swift.common.exceptions import SwiftOnFileSystemIOError
 from swift.common.exceptions import DiskFileNoSpace
 from swift.common.db import utf8encodekeys
-from swiftonfile.swift.common.fs_utils import do_stat, \
+from swiftonhpss.swift.common.fs_utils import do_stat, \
     do_walk, do_rmdir, do_log_rl, get_filename_from_fd, do_open, \
     do_getxattr, do_setxattr, do_removexattr, do_read, \
     do_close, do_dup, do_lseek, do_fstat, do_fsync, do_rename
@@ -23,10 +23,11 @@ except ImportError:
 import random
 import logging
 import time
+import hpssfs
 from uuid import uuid4
 from eventlet import sleep
 from contextlib import contextmanager
-from swiftonfile.swift.common.exceptions import AlreadyExistsAsFile, \
+from swiftonhpss.swift.common.exceptions import AlreadyExistsAsFile, \
     AlreadyExistsAsDir
 from swift.common.utils import ThreadPool, hash_path, \
     normalize_timestamp, fallocate
@@ -35,14 +36,15 @@ from swift.common.exceptions import DiskFileNotExist, DiskFileError, \
     DiskFileExpired
 from swift.common.swob import multi_range_iterator
 
-from swiftonfile.swift.common.exceptions import SwiftOnFileSystemOSError
-from swiftonfile.swift.common.fs_utils import do_fstat, do_open, do_close, \
+from swiftonhpss.swift.common.exceptions import SwiftOnFileSystemOSError, \
+    SwiftOnFileSystemIOError
+from swiftonhpss.swift.common.fs_utils import do_fstat, do_open, do_close, \
     do_unlink, do_chown, do_fsync, do_fchown, do_stat, do_write, do_read, \
     do_fadvise64, do_rename, do_fdatasync, do_lseek, do_mkdir
-from swiftonfile.swift.common.utils import read_metadata, write_metadata, \
+from swiftonhpss.swift.common.utils import read_metadata, write_metadata, \
     validate_object, create_object_metadata, rmobjdir, dir_is_object, \
     get_object_metadata, write_pickle
-from swiftonfile.swift.common.utils import X_CONTENT_TYPE, \
+from swiftonhpss.swift.common.utils import X_CONTENT_TYPE, \
     X_TIMESTAMP, X_TYPE, X_OBJECT_TYPE, FILE, OBJECT, DIR_TYPE, \
     FILE_TYPE, DEFAULT_UID, DEFAULT_GID, DIR_NON_OBJECT, DIR_OBJECT, \
     X_ETAG, X_CONTENT_LENGTH, X_MTIME
@@ -228,7 +230,7 @@ class DiskFileManager(SwiftDiskFileManager):
 
     def pickle_async_update(self, device, account, container, obj, data,
                             timestamp, policy):
-        # This method invokes swiftonfile's writepickle method.
+        # This method invokes swiftonhpss's writepickle method.
         # Is patching just write_pickle and calling parent method better ?
         device_path = self.construct_dev_path(device)
         async_dir = os.path.join(device_path, get_async_dir(policy))
@@ -295,7 +297,7 @@ class DiskFileWriter(object):
             df._threadpool.run_in_thread(self._write_entire_chunk, chunk)
         return self._upload_size
 
-    def _finalize_put(self, metadata):
+    def _finalize_put(self, metadata, purgelock=False):
         # Write out metadata before fsync() to ensure it is also forced to
         # disk.
         write_metadata(self._fd, metadata)
@ -305,6 +307,16 @@ class DiskFileWriter(object):
|
|||||||
# the pages (now that after fsync the pages will be all
|
# the pages (now that after fsync the pages will be all
|
||||||
# clean).
|
# clean).
|
||||||
do_fsync(self._fd)
|
do_fsync(self._fd)
|
||||||
|
|
||||||
|
# (HPSS) Purge lock the file now if we're asked to.
|
||||||
|
if purgelock:
|
||||||
|
try:
|
||||||
|
hpssfs.ioctl(self._fd, hpssfs.HPSSFS_PURGE_LOCK, int(purgelock))
|
||||||
|
except IOError as err:
|
||||||
|
raise SwiftOnFileSystemIOError(err.errno,
|
||||||
|
'%s, hpssfs.ioct("%s", ...)' % (
|
||||||
|
err.strerror, self._fd))
|
||||||
|
|
||||||
# From the Department of the Redundancy Department, make sure
|
# From the Department of the Redundancy Department, make sure
|
||||||
# we call drop_cache() after fsync() to avoid redundant work
|
# we call drop_cache() after fsync() to avoid redundant work
|
||||||
# (pages all clean).
|
# (pages all clean).
|
||||||
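The purge-lock ioctl in `_finalize_put` above only works on an HPSS FUSE mount, where the `hpssfs` module and its `HPSSFS_PURGE_LOCK` constant exist. A sketch of a wrapper that degrades gracefully elsewhere (the wrapper name and fallback behavior are illustrative, not part of the diff):

```python
def try_purge_lock(fd, purgelock):
    """Best-effort purge lock: True only if the lock ioctl was issued.

    hpssfs is only importable on HPSS-backed nodes; anywhere else this
    is a no-op, so the same code path can run in tests.
    """
    if not purgelock:
        return False
    try:
        import hpssfs  # present only on an HPSS FUSE mount
    except ImportError:
        return False
    try:
        hpssfs.ioctl(fd, hpssfs.HPSSFS_PURGE_LOCK, int(purgelock))
    except IOError as err:
        raise IOError(err.errno, '%s, hpssfs.ioctl(%r, PURGE_LOCK)'
                      % (err.strerror, fd))
    return True
```

On a node without `hpssfs` both calls below fall through to the `ImportError` branch.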
@@ -381,12 +393,13 @@ class DiskFileWriter(object):
             # in a thread.
             self.close()

-    def put(self, metadata):
+    def put(self, metadata, purgelock):
         """
         Finalize writing the file on disk, and renames it from the temp file
         to the real location. This should be called after the data has been
         written to the temp file.

+        :param purgelock: bool flag to signal if purge lock desired
         :param metadata: dictionary of metadata to be written
         :raises AlreadyExistsAsDir : If there exists a directory of the same
                                      name
@@ -409,7 +422,8 @@ class DiskFileWriter(object):
                     ' since the target, %s, already exists'
                     ' as a directory' % df._data_file)

-        df._threadpool.force_run_in_thread(self._finalize_put, metadata)
+        df._threadpool.force_run_in_thread(self._finalize_put, metadata,
+                                           purgelock)

         # Avoid the unlink() system call as part of the mkstemp context
         # cleanup
@@ -555,7 +569,7 @@ class DiskFile(object):
         Object names ending or beginning with a '/' as in /a, a/, /a/b/,
         etc, or object names with multiple consecutive slashes, like a//b,
         are not supported. The proxy server's constraints filter
-        swiftonfile.common.constraints.check_object_creation() should
+        swiftonhpss.common.constraints.check_object_creation() should
         reject such requests.

         :param mgr: associated on-disk manager instance
@@ -829,7 +843,7 @@ class DiskFile(object):
             return True, newmd

     @contextmanager
-    def create(self, size=None):
+    def create(self, size, cos):
         """
         Context manager to create a file. We create a temporary file first, and
         then return a DiskFileWriter object to encapsulate the state.
@@ -840,6 +854,7 @@ class DiskFile(object):
         temporary file again. If we get file name conflict, we'll retry using
         different random suffixes 1,000 times before giving up.

+        :param cos: HPSS class-of-service ID to hint at file creation
         .. note::

             An implementation is not required to perform on-disk
@@ -873,6 +888,23 @@ class DiskFile(object):
             try:
                 fd = do_open(tmppath,
                              os.O_WRONLY | os.O_CREAT | os.O_EXCL | O_CLOEXEC)
+
+                if cos:
+                    try:
+                        hpssfs.ioctl(fd, hpssfs.HPSSFS_SET_COS_HINT, int(cos))
+                    except IOError as err:
+                        raise SwiftOnFileSystemIOError(
+                            err.errno,
+                            '%s, hpssfs.ioctl("%s", SET_COS)' % (
+                                err.strerror, fd))
+                elif size:
+                    try:
+                        hpssfs.ioctl(fd, hpssfs.HPSSFS_SET_FSIZE_HINT,
+                                     long(size))
+                    except IOError as err:
+                        raise SwiftOnFileSystemIOError(
+                            err.errno,
+                            '%s, hpssfs.ioctl("%s", SET_FSIZE)' % (
+                                err.strerror, fd))
+
             except SwiftOnFileSystemOSError as gerr:
                 if gerr.errno in (errno.ENOSPC, errno.EDQUOT):
                     # Raise DiskFileNoSpace to be handled by upper layers when
626	swiftonhpss/swift/obj/server.py	Normal file
@@ -0,0 +1,626 @@
# Copyright (c) 2012-2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.

""" Object Server for Swift on HPSS """

import math
import logging
import time
import xattr
import os
import hpssfs
from hashlib import md5

from swift.common.swob import HTTPConflict, HTTPBadRequest, HeaderKeyDict, \
    HTTPInsufficientStorage, HTTPPreconditionFailed, HTTPRequestTimeout, \
    HTTPClientDisconnect, HTTPUnprocessableEntity, HTTPNotImplemented, \
    HTTPServiceUnavailable, HTTPCreated, HTTPNotFound, HTTPAccepted, \
    HTTPNoContent, Request, Response
from swift.common.utils import public, timing_stats, replication, \
    config_true_value, Timestamp, csv_append
from swift.common.request_helpers import get_name_and_placement, \
    split_and_validate_path, is_sys_or_user_meta, is_user_meta
from swiftonhpss.swift.common.exceptions import AlreadyExistsAsFile, \
    AlreadyExistsAsDir, SwiftOnFileSystemIOError, SwiftOnFileSystemOSError, \
    SwiftOnFileFsException
from swift.common.exceptions import DiskFileDeviceUnavailable, \
    DiskFileNotExist, DiskFileQuarantined, ChunkReadTimeout, DiskFileNoSpace, \
    DiskFileXattrNotSupported, DiskFileExpired, DiskFileDeleted
from swift.common.constraints import valid_timestamp, check_account_format, \
    check_destination_header

from swift.obj import server

from swiftonhpss.swift.obj.diskfile import DiskFileManager
from swiftonhpss.swift.common.constraints import check_object_creation
from swiftonhpss.swift.common import utils


class SwiftOnFileDiskFileRouter(object):
    """
    Replacement for Swift's DiskFileRouter object.
    Always returns SwiftOnFile's DiskFileManager implementation.
    """

    def __init__(self, *args, **kwargs):
        self.manager_cls = DiskFileManager(*args, **kwargs)

    def __getitem__(self, policy):
        return self.manager_cls
class ObjectController(server.ObjectController):
    """
    Subclass of the object server's ObjectController that supports
    HPSS-specific metadata headers and operations (such as COS assignment
    and purge locking).
    """

    def setup(self, conf):
        """
        Implementation specific setup. This method is called at the very end
        by the constructor to allow a specific implementation to modify
        existing attributes or add its own attributes.

        :param conf: WSGI configuration parameter
        """
        # Replaces Swift's DiskFileRouter object reference with ours.
        self._diskfile_router = SwiftOnFileDiskFileRouter(conf, self.logger)
        # This conf option will be deprecated and eventually removed in
        # future releases
        utils.read_pickled_metadata = \
            config_true_value(conf.get('read_pickled_metadata', 'no'))
    @public
    @timing_stats()
    def PUT(self, request):
        """Handle HTTP PUT requests for the Swift on File object server"""
        try:
            device, partition, account, container, obj, policy = \
                get_name_and_placement(request, 5, 5, True)

            req_timestamp = valid_timestamp(request)

            # check swiftonhpss constraints first
            error_response = check_object_creation(request, obj)
            if error_response:
                return error_response

            # (HPSS) Shameless copy-paste from ObjectController.PUT and
            # modification, because we have to do certain things like pass in
            # purgelock and class-of-service information that Swift won't know
            # to do and need to do it in a very specific order.
            new_delete_at = int(request.headers.get('X-Delete-At') or 0)
            if new_delete_at and new_delete_at < time.time():
                return HTTPBadRequest(body='X-Delete-At in past',
                                      request=request,
                                      content_type='text/plain')

            try:
                fsize = request.message_length()
            except ValueError as e:
                return HTTPBadRequest(body=str(e),
                                      request=request,
                                      content_type='text/plain')

            # Try to get DiskFile
            try:
                disk_file = self.get_diskfile(device, partition, account,
                                              container, obj, policy=policy)
            except DiskFileDeviceUnavailable:
                return HTTPInsufficientStorage(drive=device, request=request)

            try:
                orig_metadata = disk_file.read_metadata()
            except (DiskFileNotExist, DiskFileQuarantined):
                orig_metadata = {}

            # Check for If-None-Match in request
            if request.if_none_match and orig_metadata:
                if '*' in request.if_none_match:
                    # File exists already, return 412
                    return HTTPPreconditionFailed(request=request)
                if orig_metadata.get('ETag') in request.if_none_match:
                    # The current ETag matches, return 412
                    return HTTPPreconditionFailed(request=request)

            orig_timestamp = Timestamp(orig_metadata.get('X-Timestamp', 0))

            if orig_timestamp >= req_timestamp:
                return HTTPConflict(
                    request=request,
                    headers={'X-Backend-Timestamp': orig_timestamp.internal})
            orig_delete_at = int(orig_metadata.get('X-Delete-At') or 0)
            upload_expiration = time.time() + self.max_upload_time

            etag = md5()
            elapsed_time = 0

            # (HPSS) Check for HPSS-specific metadata headers
            cos = request.headers.get('X-Object-Meta-COS')
            purgelock = request.headers.get('X-Object-Meta-PurgeLock')

            try:
                # Feed DiskFile our HPSS-specific stuff
                with disk_file.create(size=fsize, cos=cos) as writer:
                    upload_size = 0
                    # FIXME: Need to figure out how to store MIME type
                    # information, to retrieve with a GET later! Or if
                    # this has already been done for us.

                    def timeout_reader():
                        with ChunkReadTimeout(self.client_timeout):
                            return request.environ['wsgi.input'].read(
                                self.network_chunk_size)

                    try:
                        for chunk in iter(lambda: timeout_reader(), ''):
                            start_time = time.time()
                            if start_time > upload_expiration:
                                self.logger.increment('PUT.timeouts')
                                return HTTPRequestTimeout(request=request)
                            etag.update(chunk)
                            upload_size = writer.write(chunk)
                            elapsed_time += time.time() - start_time
                    except ChunkReadTimeout:
                        return HTTPRequestTimeout(request=request)
                    if upload_size:
                        self.logger.transfer_rate('PUT.%s.timing' % device,
                                                  elapsed_time, upload_size)
                    if fsize and fsize != upload_size:
                        return HTTPClientDisconnect(request=request)
                    etag = etag.hexdigest()
                    if 'etag' in request.headers \
                            and request.headers['etag'].lower() != etag:
                        return HTTPUnprocessableEntity(request=request)

                    # Update object metadata
                    metadata = {'X-Timestamp': request.timestamp.internal,
                                'Content-Type':
                                    request.headers['content-type'],
                                'ETag': etag,
                                'Content-Length': str(upload_size),
                                }
                    metadata.update(
                        val for val in request.headers.iteritems()
                        if is_sys_or_user_meta('object', val[0]))
                    backend_headers = \
                        request.headers.get('X-Backend-Replication-Headers')
                    for header_key in (backend_headers or
                                       self.allowed_headers):
                        if header_key in request.headers:
                            header_caps = header_key.title()
                            metadata[header_caps] = \
                                request.headers[header_key]

                    # (HPSS) Purge lock the file
                    writer.put(metadata, purgelock=purgelock)

            except DiskFileNoSpace:
                return HTTPInsufficientStorage(drive=device, request=request)
            except SwiftOnFileSystemIOError:
                return HTTPServiceUnavailable(request=request)

            # (HPSS) Set checksum on file
            try:
                xattr.setxattr(disk_file._data_file, 'system.hpss.hash',
                               "md5:%s" % etag)
            except IOError:
                logging.exception("Error setting HPSS E2EDI checksum in "
                                  "system.hpss.hash, storing ETag in "
                                  "user.hash.checksum\n")
                try:
                    xattr.setxattr(disk_file._data_file,
                                   'user.hash.checksum', etag)
                    xattr.setxattr(disk_file._data_file,
                                   'user.hash.algorithm', 'md5')
                    xattr.setxattr(disk_file._data_file,
                                   'user.hash.state', 'Valid')
                    xattr.setxattr(disk_file._data_file,
                                   'user.hash.filesize', str(upload_size))
                except IOError as err:
                    raise SwiftOnFileSystemIOError(
                        err.errno, '%s, xattr.setxattr(...)' % err.strerror)

            # Update container metadata
            if orig_delete_at != new_delete_at:
                if new_delete_at:
                    self.delete_at_update('PUT', new_delete_at, account,
                                          container, obj, request, device,
                                          policy)
                if orig_delete_at:
                    self.delete_at_update('DELETE', orig_delete_at, account,
                                          container, obj, request, device,
                                          policy)
            self.container_update('PUT', account, container, obj, request,
                                  HeaderKeyDict(
                                      {'x-size': metadata['Content-Length'],
                                       'x-content-type':
                                           metadata['Content-Type'],
                                       'x-timestamp':
                                           metadata['X-Timestamp'],
                                       'x-etag': metadata['ETag']}),
                                  device, policy)
            # Create convenience symlink
            try:
                self.object_symlink(request, disk_file._data_file, device,
                                    account)
            except SwiftOnFileSystemOSError:
                return HTTPServiceUnavailable(request=request)
            return HTTPCreated(request=request, etag=etag)

        except (AlreadyExistsAsFile, AlreadyExistsAsDir):
            device = \
                split_and_validate_path(request, 1, 5, True)
            return HTTPConflict(drive=device, request=request)
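The If-None-Match precondition near the top of PUT can be exercised in isolation. A minimal restatement of that check as a pure function (a hypothetical helper, not part of the diff; a plain set stands in for swob's match object, which also supports `in`):

```python
def put_precondition_failed(if_none_match, orig_metadata):
    """True when a PUT must return 412, per the checks in PUT above.

    '*' forbids overwriting any existing object; otherwise the request
    fails only if the stored ETag is among the ETags listed by the
    client. With no stored metadata there is nothing to conflict with.
    """
    if not (if_none_match and orig_metadata):
        return False
    if '*' in if_none_match:
        return True
    return orig_metadata.get('ETag') in if_none_match
```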
    def object_symlink(self, request, diskfile, device, account):
        mount = diskfile.split(device)[0]
        dev = "%s%s" % (mount, device)
        project = None
        if 'X-Project-Name' in request.headers:
            project = request.headers.get('X-Project-Name')
        elif 'X-Tenant-Name' in request.headers:
            project = request.headers.get('X-Tenant-Name')
        if project:
            if project != account:
                accdir = "%s/%s" % (dev, account)
                projdir = "%s%s" % (mount, project)
                if not os.path.exists(projdir):
                    try:
                        os.symlink(accdir, projdir)
                    except OSError as err:
                        raise SwiftOnFileSystemOSError(
                            err.errno,
                            '%s, os.symlink("%s", ...)' %
                            (err.strerror, account))
    @public
    @timing_stats()
    def HEAD(self, request):
        """Handle HTTP HEAD requests for the Swift on File object server"""
        device, partition, account, container, obj, policy = \
            get_name_and_placement(request, 5, 5, True)

        # Get DiskFile
        try:
            disk_file = self.get_diskfile(device, partition, account,
                                          container, obj, policy=policy)
        except DiskFileDeviceUnavailable:
            return HTTPInsufficientStorage(drive=device, request=request)

        # Read DiskFile metadata
        try:
            metadata = disk_file.read_metadata()
        except (DiskFileNotExist, DiskFileQuarantined) as e:
            headers = {}
            if hasattr(e, 'timestamp'):
                headers['X-Backend-Timestamp'] = e.timestamp.internal
            return HTTPNotFound(request=request, headers=headers,
                                conditional_response=True)

        # Create and populate our response
        response = Response(request=request, conditional_response=True)
        response.headers['Content-Type'] = \
            metadata.get('Content-Type', 'application/octet-stream')
        for key, value in metadata.iteritems():
            if is_sys_or_user_meta('object', key) or key.lower() in \
                    self.allowed_headers:
                response.headers[key] = value
        response.etag = metadata['ETag']
        ts = Timestamp(metadata['X-Timestamp'])

        # Needed for container sync feature
        response.headers['X-Timestamp'] = ts.normal
        response.headers['X-Backend-Timestamp'] = ts.internal
        response.content_length = int(metadata['Content-Length'])
        try:
            response.content_encoding = metadata['Content-Encoding']
        except KeyError:
            pass

        try:
            self.get_hpss_xattr(request, response, disk_file)
        except SwiftOnFileSystemIOError:
            return HTTPServiceUnavailable(request=request)

        # Bill Owen's hack to force container sync on HEAD, so we can manually
        # tell the Swift container server when objects exist on disk it didn't
        # know about.
        # TODO: do a similar trick for HEADing objects that didn't exist
        # TODO: see if this block that's duplicated can be a function instead
        if 'X-Object-Sysmeta-Update-Container' in response.headers:
            self.container_update(
                'PUT', account, container, obj, request,
                HeaderKeyDict(
                    {'x-size': metadata['Content-Length'],
                     'x-content-type': metadata['Content-Type'],
                     'x-timestamp': metadata['X-Timestamp'],
                     'x-etag': metadata['ETag']}),
                device, policy)
            response.headers.pop('X-Object-Sysmeta-Update-Container')

        return response
    @public
    @timing_stats()
    def GET(self, request):
        """Handle HTTP GET requests for the Swift on File object server"""
        device, partition, account, container, obj, policy = \
            get_name_and_placement(request, 5, 5, True)
        keep_cache = self.keep_cache_private or (
            'X-Auth-Token' not in request.headers and
            'X-Storage-Token' not in request.headers
        )

        # Get DiskFile
        try:
            disk_file = self.get_diskfile(device, partition, account,
                                          container, obj, policy)
        except DiskFileDeviceUnavailable:
            return HTTPInsufficientStorage(drive=device, request=request)

        # Get metadata and append it to response
        try:
            with disk_file.open():
                metadata = disk_file.get_metadata()
                obj_size = int(metadata['Content-Length'])
                file_x_ts = Timestamp(metadata['X-Timestamp'])
                try:
                    # (HPSS) Our file could end up being on an offline
                    # tape, so we need to check for it and return an
                    # HTTP 'accepted, but still processing' response.
                    if self.is_offline(disk_file._data_file, request):
                        return HTTPAccepted(request=request)
                except (SwiftOnFileSystemIOError, SwiftOnFileFsException):
                    return HTTPServiceUnavailable(request=request)

                response = Response(
                    app_iter=disk_file.reader(keep_cache=keep_cache),
                    request=request, conditional_response=True
                )
                response.headers['Content-Type'] = metadata.get(
                    'Content-Type', 'application/octet-stream'
                )
                for key, value in metadata.iteritems():
                    if is_sys_or_user_meta('object', key) or \
                            key.lower() in self.allowed_headers:
                        response.headers[key] = value
                response.etag = metadata['ETag']
                response.last_modified = math.ceil(float(file_x_ts))
                response.content_length = obj_size
                try:
                    response.content_encoding = metadata['Content-Encoding']
                except KeyError:
                    pass
                response.headers['X-Timestamp'] = file_x_ts.normal
                response.headers['X-Backend-Timestamp'] = file_x_ts.internal
                # (HPSS) Inject HPSS xattr metadata into headers
                try:
                    self.get_hpss_xattr(request, response, disk_file)
                except SwiftOnFileSystemIOError:
                    return HTTPServiceUnavailable(request=request)
                return request.get_response(response)
        except (DiskFileNotExist, DiskFileQuarantined) as e:
            headers = {}
            if hasattr(e, 'timestamp'):
                headers['X-Backend-Timestamp'] = e.timestamp.internal
            return HTTPNotFound(request=request, headers=headers,
                                conditional_response=True)
    # TODO: refactor this to live in DiskFile!
    # Along with all the other HPSS stuff
    def get_hpss_xattr(self, request, response, diskfile):
        attrlist = {'X-HPSS-Account': 'account',
                    'X-HPSS-BitfileID': 'bitfile',
                    'X-HPSS-Comment': 'comment',
                    'X-HPSS-ClassOfServiceID': 'cos',
                    'X-HPSS-FamilyID': 'family',
                    'X-HPSS-FilesetID': 'fileset',
                    'X-HPSS-Bytes': 'level',
                    'X-HPSS-Reads': 'reads',
                    'X-HPSS-RealmID': 'realm',
                    'X-HPSS-SubsysID': 'subsys',
                    'X-HPSS-Writes': 'writes',
                    'X-HPSS-OptimumSize': 'optimum',
                    'X-HPSS-Hash': 'hash',
                    'X-HPSS-PurgelockStatus': 'purgelock'}
        for key in request.headers:
            val = attrlist.get(key, None)
            if val:
                attr = 'system.hpss.%s' % val
                try:
                    response.headers[key] = \
                        xattr.getxattr(diskfile._data_file, attr)
                except IOError as err:
                    raise SwiftOnFileSystemIOError(
                        err.errno,
                        '%s, xattr.getxattr("%s", ...)' % (err.strerror, attr)
                    )
    # TODO: move this to DiskFile
    # TODO: make it more obvious how we're parsing the level xattr
    def is_offline(self, path, request):
        try:
            byteslevel = xattr.getxattr(path, "system.hpss.level")
        except IOError as err:
            raise SwiftOnFileSystemIOError(
                err.errno,
                '%s, xattr.getxattr("system.hpss.level", ...)' % err.strerror
            )

        try:
            byteslevelstring = byteslevel.split(";")
            bytesfirstlevel = byteslevelstring[0].split(':')
            bytesfile = bytesfirstlevel[2].rstrip(' ')
        except ValueError:
            raise SwiftOnFileFsException("Couldn't get system.hpss.level!")
        setbytes = set(str(bytesfile))
        setsize = set(str(os.stat(path).st_size))
        return setbytes != setsize
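The parsing in `is_offline()` assumes a `system.hpss.level` value shaped like `0:disk:8 ;1:tape:8` -- semicolon-separated hierarchy levels with colon-separated fields, the third field holding the byte count resident at that level (an assumption inferred from the split/rstrip calls, not documented in the diff). Note also that the method compares `set()`s of the digit characters, so e.g. `'121'` and `'112'` bytes would compare equal; a direct string comparison avoids that. A sketch under those assumptions:

```python
def bytes_at_top_level(level_xattr):
    """Byte count resident at the topmost HPSS hierarchy level.

    Assumes system.hpss.level looks like '0:disk:8 ;1:tape:8':
    semicolon-separated levels, colon-separated fields per level.
    """
    first_level = level_xattr.split(';')[0]
    fields = first_level.split(':')
    return fields[2].rstrip(' ')


def is_offline(level_xattr, file_size):
    """File counts as offline when the top level holds fewer bytes
    than stat() reports, i.e. the data lives further down (on tape)."""
    return bytes_at_top_level(level_xattr) != str(file_size)
```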
    @public
    @timing_stats()
    def POST(self, request):
        """Handle HTTP POST requests for the Swift on File object server"""
        device, partition, account, container, obj, policy = \
            get_name_and_placement(request, 5, 5, True)
        req_timestamp = valid_timestamp(request)
        new_delete_at = int(request.headers.get('X-Delete-At') or 0)
        if new_delete_at and new_delete_at < time.time():
            return HTTPBadRequest(body='X-Delete-At in past', request=request,
                                  content_type='text/plain')

        # Get DiskFile
        try:
            disk_file = self.get_diskfile(device, partition, account,
                                          container, obj, policy)
        except DiskFileDeviceUnavailable:
            return HTTPInsufficientStorage(drive=device, request=request)

        # Set purge lock status if we got it
        purgelock = request.headers.get('X-Object-Meta-PurgeLock')
        if purgelock:
            try:
                hpssfs.ioctl(disk_file._fd, hpssfs.HPSSFS_PURGE_LOCK,
                             int(purgelock))
            except IOError as err:
                raise SwiftOnFileSystemIOError(
                    err.errno,
                    '%s, hpssfs.ioctl("%s", ...)' %
                    (err.strerror, disk_file._fd))

        # Update metadata from request
        try:
            orig_metadata = disk_file.read_metadata()
        except (DiskFileNotExist, DiskFileQuarantined):
            return HTTPNotFound(request=request)
        orig_timestamp = Timestamp(orig_metadata.get('X-Timestamp', 0))
        if orig_timestamp >= req_timestamp:
            return HTTPConflict(request=request,
                                headers={
                                    'X-Backend-Timestamp': orig_timestamp.internal
                                })
        metadata = {'X-Timestamp': req_timestamp.internal}
        metadata.update(val for val in request.headers.iteritems()
                        if is_user_meta('object', val[0]))
        for header_key in self.allowed_headers:
            if header_key in request.headers:
                header_caps = header_key.title()
                metadata[header_caps] = request.headers[header_key]
        orig_delete_at = int(orig_metadata.get('X-Delete-At') or 0)
        if orig_delete_at != new_delete_at:
            if new_delete_at:
                self.delete_at_update('PUT', new_delete_at, account,
                                      container, obj, request, device, policy)
            if orig_delete_at:
                self.delete_at_update('DELETE', orig_delete_at, account,
                                      container, obj, request, device, policy)
        disk_file.write_metadata(metadata)
        return HTTPAccepted(request=request)
    @public
    @timing_stats()
    def DELETE(self, request):
        """Handle HTTP DELETE requests for the Swift on File object server"""
        device, partition, account, container, obj, policy = \
            get_name_and_placement(request, 5, 5, True)
        req_timestamp = valid_timestamp(request)
        try:
            disk_file = self.get_diskfile(device, partition, account,
                                          container, obj, policy)
        except DiskFileDeviceUnavailable:
            return HTTPInsufficientStorage(drive=device, request=request)

        try:
            orig_metadata = disk_file.read_metadata()
        except DiskFileXattrNotSupported:
            return HTTPInsufficientStorage(drive=device, request=request)
        except DiskFileExpired as e:
            orig_timestamp = e.timestamp
            orig_metadata = e.metadata
            response_class = HTTPNotFound
        except DiskFileDeleted as e:
            orig_timestamp = e.timestamp
            orig_metadata = {}
            response_class = HTTPNotFound
        except (DiskFileNotExist, DiskFileQuarantined):
            orig_timestamp = 0
            orig_metadata = {}
            response_class = HTTPNotFound
        else:
            orig_timestamp = Timestamp(orig_metadata.get('X-Timestamp', 0))
            if orig_timestamp < req_timestamp:
                response_class = HTTPNoContent
            else:
                response_class = HTTPConflict
        response_timestamp = max(orig_timestamp, req_timestamp)
        orig_delete_at = int(orig_metadata.get('X-Delete-At') or 0)
        try:
            req_if_delete_at = int(request.headers['X-If-Delete-At'])
        except KeyError:
            pass
        except ValueError:
            return HTTPBadRequest(request=request,
                                  body='Bad X-If-Delete-At header value')
        else:
            if not orig_timestamp:
                return HTTPNotFound()
            if orig_delete_at != req_if_delete_at:
                return HTTPPreconditionFailed(
                    request=request,
                    body='X-If-Delete-At and X-Delete-At do not match')
            else:
                response_class = HTTPNoContent
        if orig_delete_at:
            self.delete_at_update('DELETE', orig_delete_at, account,
                                  container, obj, request, device, policy)
        if orig_timestamp < req_timestamp:
            disk_file.delete(req_timestamp)
            self.container_update('DELETE', account, container, obj, request,
                                  HeaderKeyDict(
                                      {'x-timestamp': req_timestamp.internal}
                                  ), device, policy)
        return response_class(
            request=request,
            headers={'X-Backend-Timestamp': response_timestamp.internal}
        )
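The `X-If-Delete-At` branch in DELETE above encodes a small decision table that is easy to get wrong. A compact restatement as a pure function (a hypothetical helper mirroring those early returns, where `None` means the delete may proceed):

```python
def check_if_delete_at(headers, orig_timestamp, orig_delete_at):
    """Return an HTTP status for the X-If-Delete-At precondition.

    400: header present but not an integer
    404: header present but no existing object to conditionally delete
    412: header disagrees with the stored X-Delete-At
    None: no objection (header absent, or values match)
    """
    if 'X-If-Delete-At' not in headers:
        return None
    try:
        req_if_delete_at = int(headers['X-If-Delete-At'])
    except ValueError:
        return 400
    if not orig_timestamp:
        return 404
    if orig_delete_at != req_if_delete_at:
        return 412
    return None
```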
    @public
    @replication
    @timing_stats(sample_rate=0.1)
    def REPLICATE(self, request):
        """
        In Swift, this method handles REPLICATE requests for the Swift
        Object Server. This is used by the object replicator to get hashes
        for directories.

        Swiftonhpss does not support this as it expects the underlying
        filesystem to take care of replication. Also, swiftonhpss has no
        notion of hashes for directories.
        """
        return HTTPNotImplemented(request=request)

    @public
    @replication
    @timing_stats(sample_rate=0.1)
    def REPLICATION(self, request):
        return HTTPNotImplemented(request=request)


def app_factory(global_conf, **local_conf):
    """paste.deploy app factory for creating WSGI object server apps"""
    conf = global_conf.copy()
    conf.update(local_conf)
    return ObjectController(conf)
@@ -33,12 +33,12 @@ class TestSwiftOnFileEnv:
         cls.conn.authenticate()
         cls.account = Account(cls.conn, tf.config.get('account',
                                                       tf.config['username']))
-        cls.root_dir = os.path.join('/mnt/swiftonfile/test')
+        cls.root_dir = os.path.join('/mnt/swiftonhpss/test')
         cls.account.delete_containers()

         cls.file_size = 8
         cls.container = cls.account.container(Utils.create_name())
-        if not cls.container.create():
+        if not cls.container.create(None, None):
             raise ResponseError(cls.conn.response)

         cls.dirs = [
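The recurring `create()` → `create(None, None)` change above suggests this fork's functional-test `Container` helper now takes headers and query parameters as explicit positional arguments instead of optional keywords, so every call site must pass `None` for the defaults. A hedged sketch of what such a signature looks like (this `Container` class is hypothetical, not the actual test helper):

```python
class Container(object):
    """Hypothetical sketch of the test helper after the signature change:
    callers must pass hdrs and parms explicitly (None for the defaults)."""

    def __init__(self, name):
        self.name = name

    def create(self, hdrs, parms):
        # Normalize None into empty dicts, as the old defaults did.
        hdrs = hdrs or {}
        parms = parms or {}
        # A real helper would issue PUT /v1/<account>/<name> here;
        # we just return what would be sent.
        return ('PUT', self.name, hdrs, parms)

cont = Container('c1')
# Old call sites `cont.create()` become `cont.create(None, None)`.
method, name, hdrs, parms = cont.create(None, None)
assert (method, name, hdrs, parms) == ('PUT', 'c1', {}, {})
```

Making the arguments positional forces every test to state its intent about headers, which is presumably why the ACL-setting call sites below were collapsed to `create(None, None)` as well.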
@@ -101,7 +101,7 @@ class TestAccountEnv(object):
         cls.containers = []
         for i in range(10):
             cont = cls.account.container(Utils.create_name())
-            if not cont.create():
+            if not cont.create(None, None):
                 raise ResponseError(cls.conn.response)

             cls.containers.append(cont)
@@ -132,7 +132,7 @@ class TestAccount(Base):
     def testInvalidUTF8Path(self):
         invalid_utf8 = Utils.create_utf8_name()[::-1]
         container = self.env.account.container(invalid_utf8)
-        self.assert_(not container.create(cfg={'no_path_quote': True}))
+        self.assert_(not container.create(None, None))
         self.assert_status(412)
         self.assert_body('Invalid UTF8 or contains NULL')

@@ -337,7 +337,7 @@ class TestContainerEnv(object):
         cls.account.delete_containers()

         cls.container = cls.account.container(Utils.create_name())
-        if not cls.container.create():
+        if not cls.container.create(None, None):
             raise ResponseError(cls.conn.response)

         cls.file_count = 10
@@ -372,15 +372,15 @@ class TestContainer(Base):
                   limit + 1, limit + 10, limit + 100):
             cont = self.env.account.container('a' * l)
             if l <= limit:
-                self.assert_(cont.create())
+                self.assert_(cont.create(None, None))
                 self.assert_status(201)
             else:
-                self.assert_(not cont.create())
+                self.assert_(not cont.create(None, None))
                 self.assert_status(400)

     def testFileThenContainerDelete(self):
         cont = self.env.account.container(Utils.create_name())
-        self.assert_(cont.create())
+        self.assert_(cont.create(None, None))
         file_item = cont.file(Utils.create_name())
         self.assert_(file_item.write_random())

@@ -394,7 +394,7 @@ class TestContainer(Base):

     def testFileListingLimitMarkerPrefix(self):
         cont = self.env.account.container(Utils.create_name())
-        self.assert_(cont.create())
+        self.assert_(cont.create(None, None))

         files = sorted([Utils.create_name() for x in xrange(10)])
         for f in files:
@@ -413,7 +413,7 @@ class TestContainer(Base):
     def testPrefixAndLimit(self):
         load_constraint('container_listing_limit')
         cont = self.env.account.container(Utils.create_name())
-        self.assert_(cont.create())
+        self.assert_(cont.create(None, None))

         prefix_file_count = 10
         limit_count = 2
@@ -444,7 +444,7 @@ class TestContainer(Base):

     def testCreate(self):
         cont = self.env.account.container(Utils.create_name())
-        self.assert_(cont.create())
+        self.assert_(cont.create(None, None))
         self.assert_status(201)
         self.assert_(cont.name in self.env.account.containers())

@@ -459,13 +459,13 @@ class TestContainer(Base):
         valid_utf8 = Utils.create_utf8_name()
         invalid_utf8 = valid_utf8[::-1]
         container = self.env.account.container(valid_utf8)
-        self.assert_(container.create(cfg={'no_path_quote': True}))
+        self.assert_(container.create(None, None))
         self.assert_(container.name in self.env.account.containers())
         self.assertEqual(container.files(), [])
         self.assert_(container.delete())

         container = self.env.account.container(invalid_utf8)
-        self.assert_(not container.create(cfg={'no_path_quote': True}))
+        self.assert_(not container.create(None, None))
         self.assert_status(412)
         self.assertRaises(ResponseError, container.files,
                           cfg={'no_path_quote': True})
@@ -473,9 +473,9 @@ class TestContainer(Base):

     def testCreateOnExisting(self):
         cont = self.env.account.container(Utils.create_name())
-        self.assert_(cont.create())
+        self.assert_(cont.create(None, None))
         self.assert_status(201)
-        self.assert_(cont.create())
+        self.assert_(cont.create(None, None))
         self.assert_status(202)

     def testSlashInName(self):
@@ -491,14 +491,14 @@ class TestContainer(Base):
             cont_name = cont_name.encode('utf-8')

         cont = self.env.account.container(cont_name)
-        self.assert_(not cont.create(cfg={'no_path_quote': True}),
+        self.assert_(not cont.create(None, None),
                      'created container with name %s' % (cont_name))
         self.assert_status(404)
         self.assert_(cont.name not in self.env.account.containers())

     def testDelete(self):
         cont = self.env.account.container(Utils.create_name())
-        self.assert_(cont.create())
+        self.assert_(cont.create(None, None))
         self.assert_status(201)
         self.assert_(cont.delete())
         self.assert_status(204)
@@ -511,7 +511,7 @@ class TestContainer(Base):

     def testDeleteOnContainerWithFiles(self):
         cont = self.env.account.container(Utils.create_name())
-        self.assert_(cont.create())
+        self.assert_(cont.create(None, None))
         file_item = cont.file(Utils.create_name())
         file_item.write_random(self.env.file_size)
         self.assert_(file_item.name in cont.files())
@@ -600,19 +600,19 @@ class TestContainer(Base):

     def testTooLongName(self):
         cont = self.env.account.container('x' * 257)
-        self.assert_(not cont.create(),
+        self.assert_(not cont.create(None, None),
                      'created container with name %s' % (cont.name))
         self.assert_status(400)

     def testContainerExistenceCachingProblem(self):
         cont = self.env.account.container(Utils.create_name())
         self.assertRaises(ResponseError, cont.files)
-        self.assert_(cont.create())
+        self.assert_(cont.create(None, None))
         cont.files()

         cont = self.env.account.container(Utils.create_name())
         self.assertRaises(ResponseError, cont.files)
-        self.assert_(cont.create())
+        self.assert_(cont.create(None, None))
         file_item = cont.file(Utils.create_name())
         file_item.write_random()

@@ -634,7 +634,7 @@ class TestContainerPathsEnv(object):
         cls.file_size = 8

         cls.container = cls.account.container(Utils.create_name())
-        if not cls.container.create():
+        if not cls.container.create(None, None):
             raise ResponseError(cls.conn.response)

         cls.files = [
@@ -824,7 +824,7 @@ class TestFileEnv(object):
         cls.account2.delete_containers()

         cls.container = cls.account.container(Utils.create_name())
-        if not cls.container.create():
+        if not cls.container.create(None, None):
             raise ResponseError(cls.conn.response)

         cls.file_size = 128
@@ -856,7 +856,7 @@ class TestFile(Base):
         file_item.sync_metadata(metadata)

         dest_cont = self.env.account.container(Utils.create_name())
-        self.assert_(dest_cont.create())
+        self.assert_(dest_cont.create(None, None))

         # copy both from within and across containers
         for cont in (self.env.container, dest_cont):
@@ -886,7 +886,7 @@ class TestFile(Base):
         file_item.sync_metadata(metadata)

         dest_cont = self.env.account.container(Utils.create_name())
-        self.assert_(dest_cont.create())
+        self.assert_(dest_cont.create(None, None))

         acct = self.env.conn.account_name
         # copy both from within and across containers
@@ -909,9 +909,7 @@ class TestFile(Base):
         self.assert_(metadata == file_item.metadata)

         dest_cont = self.env.account2.container(Utils.create_name())
-        self.assert_(dest_cont.create(hdrs={
-            'X-Container-Write': self.env.conn.user_acl
-        }))
+        self.assert_(dest_cont.create(None, None))

         acct = self.env.conn2.account_name
         # copy both with and without initial slash
@@ -937,7 +935,7 @@ class TestFile(Base):
         file_item.write_random()

         dest_cont = self.env.account.container(Utils.create_name())
-        self.assert_(dest_cont.create())
+        self.assert_(dest_cont.create(None, None))

         for prefix in ('', '/'):
             # invalid source container
@@ -977,14 +975,9 @@ class TestFile(Base):
         file_item.write_random()

         dest_cont = self.env.account.container(Utils.create_name())
-        self.assert_(dest_cont.create(hdrs={
-            'X-Container-Read': self.env.conn2.user_acl
-        }))
+        self.assert_(dest_cont.create(None, None))
         dest_cont2 = self.env.account2.container(Utils.create_name())
-        self.assert_(dest_cont2.create(hdrs={
-            'X-Container-Write': self.env.conn.user_acl,
-            'X-Container-Read': self.env.conn.user_acl
-        }))
+        self.assert_(dest_cont2.create(None, None))

         for acct, cont in ((acct, dest_cont), (acct2, dest_cont2)):
             for prefix in ('', '/'):
@@ -1074,7 +1067,7 @@ class TestFile(Base):
         data = file_item.write_random()

         dest_cont = self.env.account.container(Utils.create_name())
-        self.assert_(dest_cont.create())
+        self.assert_(dest_cont.create(None, None))

         # copy both from within and across containers
         for cont in (self.env.container, dest_cont):
@@ -1097,9 +1090,7 @@ class TestFile(Base):
     def testCopyFromAccountHeader(self):
         acct = self.env.conn.account_name
         src_cont = self.env.account.container(Utils.create_name())
-        self.assert_(src_cont.create(hdrs={
-            'X-Container-Read': self.env.conn2.user_acl
-        }))
+        self.assert_(src_cont.create(None, None))
         source_filename = Utils.create_name()
         file_item = src_cont.file(source_filename)

@@ -1111,11 +1102,9 @@ class TestFile(Base):
         data = file_item.write_random()

         dest_cont = self.env.account.container(Utils.create_name())
-        self.assert_(dest_cont.create())
+        self.assert_(dest_cont.create(None, None))
         dest_cont2 = self.env.account2.container(Utils.create_name())
-        self.assert_(dest_cont2.create(hdrs={
-            'X-Container-Write': self.env.conn.user_acl
-        }))
+        self.assert_(dest_cont2.create(None, None))

         for cont in (src_cont, dest_cont, dest_cont2):
             # copy both with and without initial slash
@@ -1171,14 +1160,12 @@ class TestFile(Base):
     def testCopyFromAccountHeader404s(self):
         acct = self.env.conn2.account_name
         src_cont = self.env.account2.container(Utils.create_name())
-        self.assert_(src_cont.create(hdrs={
-            'X-Container-Read': self.env.conn.user_acl
-        }))
+        self.assert_(src_cont.create(None, None))
         source_filename = Utils.create_name()
         file_item = src_cont.file(source_filename)
         file_item.write_random()
         dest_cont = self.env.account.container(Utils.create_name())
-        self.assert_(dest_cont.create())
+        self.assert_(dest_cont.create(None, None))

         for prefix in ('', '/'):
             # invalid source container
@@ -1317,7 +1304,7 @@ class TestFile(Base):
                       'zip': 'application/zip'}

         container = self.env.account.container(Utils.create_name())
-        self.assert_(container.create())
+        self.assert_(container.create(None, None))

         for i in file_types.keys():
             file_item = container.file(Utils.create_name() + '.' + i)
@@ -1628,7 +1615,7 @@ class TestFile(Base):

     def testSerialization(self):
         container = self.env.account.container(Utils.create_name())
-        self.assert_(container.create())
+        self.assert_(container.create(None, None))

         files = []
         for i in (0, 1, 10, 100, 1000, 10000):
@@ -1766,7 +1753,7 @@ class TestDloEnv(object):

         cls.container = cls.account.container(Utils.create_name())

-        if not cls.container.create():
+        if not cls.container.create(None, None):
             raise ResponseError(cls.conn.response)

         # avoid getting a prefix that stops halfway through an encoded
@@ -1973,7 +1960,7 @@ class TestFileComparisonEnv(object):

         cls.container = cls.account.container(Utils.create_name())

-        if not cls.container.create():
+        if not cls.container.create(None, None):
             raise ResponseError(cls.conn.response)

         cls.file_count = 20
@@ -2110,7 +2097,7 @@ class TestSloEnv(object):

         cls.container = cls.account.container(Utils.create_name())

-        if not cls.container.create():
+        if not cls.container.create(None, None):
             raise ResponseError(cls.conn.response)

         seg_info = {}
@@ -2293,9 +2280,7 @@ class TestSlo(Base):
         # copy to different account
         acct = self.env.conn2.account_name
         dest_cont = self.env.account2.container(Utils.create_name())
-        self.assert_(dest_cont.create(hdrs={
-            'X-Container-Write': self.env.conn.user_acl
-        }))
+        self.assert_(dest_cont.create(None, None))
         file_item = self.env.container.file("manifest-abcde")
         file_item.copy_account(acct, dest_cont, "copied-abcde")

@@ -2334,9 +2319,7 @@ class TestSlo(Base):
         # different account
         acct = self.env.conn2.account_name
         dest_cont = self.env.account2.container(Utils.create_name())
-        self.assert_(dest_cont.create(hdrs={
-            'X-Container-Write': self.env.conn.user_acl
-        }))
+        self.assert_(dest_cont.create(None, None))
         file_item.copy_account(acct,
                                dest_cont,
                                "copied-abcde-manifest-only",
@@ -2440,12 +2423,11 @@ class TestObjectVersioningEnv(object):
         prefix = Utils.create_name().decode("utf-8")[:10].encode("utf-8")

         cls.versions_container = cls.account.container(prefix + "-versions")
-        if not cls.versions_container.create():
+        if not cls.versions_container.create(None, None):
             raise ResponseError(cls.conn.response)

         cls.container = cls.account.container(prefix + "-objs")
-        if not cls.container.create(
-                hdrs={'X-Versions-Location': cls.versions_container.name}):
+        if not cls.container.create(None, None):
             raise ResponseError(cls.conn.response)

         container_info = cls.container.info()
@@ -2502,13 +2484,11 @@ class TestCrossPolicyObjectVersioningEnv(object):

         cls.versions_container = cls.account.container(prefix + "-versions")
         if not cls.versions_container.create(
-                {'X-Storage-Policy': policy['name']}):
+                {'X-Storage-Policy': policy['name']}, None):
             raise ResponseError(cls.conn.response)

         cls.container = cls.account.container(prefix + "-objs")
-        if not cls.container.create(
-                hdrs={'X-Versions-Location': cls.versions_container.name,
-                      'X-Storage-Policy': version_policy['name']}):
+        if not cls.container.create(None, None):
             raise ResponseError(cls.conn.response)

         container_info = cls.container.info()
@@ -2621,7 +2601,7 @@ class TestObjectVersioning(Base):
     def test_versioning_check_acl(self):
         container = self.env.container
         versions_container = self.env.versions_container
-        versions_container.create(hdrs={'X-Container-Read': '.r:*,.rlistings'})
+        versions_container.create(None, None)

         obj_name = Utils.create_name()
         versioned_obj = container.file(obj_name)
@@ -2690,7 +2670,7 @@ class TestTempurlEnv(object):
         })

         cls.container = cls.account.container(Utils.create_name())
-        if not cls.container.create():
+        if not cls.container.create(None, None):
             raise ResponseError(cls.conn.response)

         cls.obj = cls.container.file(Utils.create_name())
@@ -2872,9 +2852,9 @@ class TestContainerTempurlEnv(object):

         cls.container = cls.account.container(Utils.create_name())
         if not cls.container.create({
                 'x-container-meta-temp-url-key': cls.tempurl_key,
                 'x-container-meta-temp-url-key-2': cls.tempurl_key2,
-                'x-container-read': cls.account2.name}):
+                'x-container-read': cls.account2.name}, None):
             raise ResponseError(cls.conn.response)

         cls.obj = cls.container.file(Utils.create_name())
@@ -3064,9 +3044,9 @@ class TestSloTempurlEnv(object):

         cls.manifest_container = cls.account.container(Utils.create_name())
         cls.segments_container = cls.account.container(Utils.create_name())
-        if not cls.manifest_container.create():
+        if not cls.manifest_container.create(None, None):
             raise ResponseError(cls.conn.response)
-        if not cls.segments_container.create():
+        if not cls.segments_container.create(None, None):
             raise ResponseError(cls.conn.response)

         seg1 = cls.segments_container.file(Utils.create_name())
@@ -15,7 +15,7 @@

 import unittest
 from swift.common.swob import Request, Response
-from swiftonfile.swift.common.middleware import check_constraints
+from swiftonhpss.swift.common.middleware import check_constraints
 from mock import Mock, patch
 from contextlib import nested

@@ -36,7 +36,7 @@ class TestConstraintsMiddleware(unittest.TestCase):

     def setUp(self):
         self.conf = {
-            'policies': 'swiftonfile,cephfs-policy'}
+            'policies': 'swiftonhpss,cephfs-policy'}

         self.container1_info_mock = Mock()
         self.container1_info_mock.return_value = {'status': 0,
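The `policies` option above is a comma-separated list of storage-policy names. Middleware like this typically splits and strips that string once at init time; a minimal sketch of such parsing (an assumed helper, not the middleware's actual code):

```python
def parse_policies(conf):
    # Split the comma-separated 'policies' option and drop whitespace
    # and empty entries, mirroring how such conf strings are usually read.
    raw = conf.get('policies', '')
    return [p.strip() for p in raw.split(',') if p.strip()]

assert parse_policies({'policies': 'swiftonhpss,cephfs-policy'}) == \
    ['swiftonhpss', 'cephfs-policy']
```

A request's target policy can then be checked for membership in this list to decide whether the stricter path constraints apply.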
@@ -56,7 +56,7 @@ class TestConstraintsMiddleware(unittest.TestCase):

         self.policies_mock = Mock()
         self.sof_policy_mock = Mock()
-        self.sof_policy_mock.name = 'swiftonfile'
+        self.sof_policy_mock.name = 'swiftonhpss'
         attrs = {'get_by_index.return_value': self.sof_policy_mock }
         self.policies_mock.configure_mock(**attrs)

@@ -79,9 +79,9 @@ class TestConstraintsMiddleware(unittest.TestCase):
         path = '/V1.0/a/c2//o'

         with nested(
-                patch("swiftonfile.swift.common.middleware.check_constraints."
+                patch("swiftonhpss.swift.common.middleware.check_constraints."
                      "get_container_info", self.container2_info_mock),
-                patch("swiftonfile.swift.common.middleware.check_constraints."
+                patch("swiftonhpss.swift.common.middleware.check_constraints."
                      "POLICIES", self.policies_mock)):
             resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
                                  ).get_response(self.test_check)
@@ -93,9 +93,9 @@ class TestConstraintsMiddleware(unittest.TestCase):
         path = '/V1.0/a/c2/o/'

         with nested(
-                patch("swiftonfile.swift.common.middleware.check_constraints."
+                patch("swiftonhpss.swift.common.middleware.check_constraints."
                      "get_container_info", self.container2_info_mock),
-                patch("swiftonfile.swift.common.middleware.check_constraints."
+                patch("swiftonhpss.swift.common.middleware.check_constraints."
                      "POLICIES", self.policies_mock)):
             resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
                                  ).get_response(self.test_check)
@@ -108,9 +108,9 @@ class TestConstraintsMiddleware(unittest.TestCase):
         path = '/V1.0/a/c2/.'

         with nested(
-                patch("swiftonfile.swift.common.middleware.check_constraints."
+                patch("swiftonhpss.swift.common.middleware.check_constraints."
                      "get_container_info", self.container2_info_mock),
-                patch("swiftonfile.swift.common.middleware.check_constraints."
+                patch("swiftonhpss.swift.common.middleware.check_constraints."
                      "POLICIES", self.policies_mock)):
             resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
                                  ).get_response(self.test_check)
@@ -122,19 +122,19 @@ class TestConstraintsMiddleware(unittest.TestCase):
         longname = 'c' * 256
         path = '/V1.0/a/' + longname
         resp = Request.blank(path, method='PUT',
-                             headers={'X-Storage-Policy': 'swiftonfile'}
+                             headers={'X-Storage-Policy': 'swiftonhpss'}
                              ).get_response(self.test_check)
         self.assertEquals(resp.status_int, 400)

         # test case where storage policy is not defined in header and
         # container would be created in default policy, which happens to be
-        # a swiftonfile policy
+        # a swiftonhpss policy
         default_policies_mock = Mock()
         sof_policy_mock = Mock()
-        sof_policy_mock.name = 'swiftonfile'
+        sof_policy_mock.name = 'swiftonhpss'
         attrs = {'default.return_value': self.sof_policy_mock }
         default_policies_mock.configure_mock(**attrs)
-        with patch("swiftonfile.swift.common.middleware.check_constraints."
+        with patch("swiftonhpss.swift.common.middleware.check_constraints."
                    "POLICIES", default_policies_mock):
             resp = Request.blank(path, method='PUT').get_response(self.test_check)
         self.assertEquals(resp.status_int, 400)
@ -145,10 +145,10 @@ class TestConstraintsMiddleware(unittest.TestCase):
|
|||||||
path = '/V1.0/a/c2/' + longname
|
path = '/V1.0/a/c2/' + longname
|
||||||
|
|
||||||
with nested(
|
with nested(
|
||||||
patch("swiftonfile.swift.common.middleware."
|
patch("swiftonhpss.swift.common.middleware."
|
||||||
"check_constraints.get_container_info",
|
"check_constraints.get_container_info",
|
||||||
self.container2_info_mock),
|
self.container2_info_mock),
|
||||||
patch("swiftonfile.swift.common.middleware."
|
patch("swiftonhpss.swift.common.middleware."
|
||||||
"check_constraints.POLICIES", self.policies_mock)):
|
"check_constraints.POLICIES", self.policies_mock)):
|
||||||
resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
|
resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
|
||||||
).get_response(self.test_check)
|
).get_response(self.test_check)
|
||||||
@ -158,9 +158,9 @@ class TestConstraintsMiddleware(unittest.TestCase):
|
|||||||
path = '/V1.0/a/c2/' + longname
|
path = '/V1.0/a/c2/' + longname
|
||||||
|
|
||||||
with nested(
|
with nested(
|
||||||
patch("swiftonfile.swift.common.middleware.check_constraints."
|
patch("swiftonhpss.swift.common.middleware.check_constraints."
|
||||||
"get_container_info", self.container2_info_mock),
|
"get_container_info", self.container2_info_mock),
|
||||||
patch("swiftonfile.swift.common.middleware.check_constraints."
|
patch("swiftonhpss.swift.common.middleware.check_constraints."
|
||||||
"POLICIES", self.policies_mock)):
|
"POLICIES", self.policies_mock)):
|
||||||
resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
|
resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
|
||||||
).get_response(self.test_check)
|
).get_response(self.test_check)
|
||||||
@ -170,7 +170,7 @@ class TestConstraintsMiddleware(unittest.TestCase):
|
|||||||
def test_PUT_object_with_policy0(self):
|
def test_PUT_object_with_policy0(self):
|
||||||
path = '/V1.0/a/c1//o'
|
path = '/V1.0/a/c1//o'
|
||||||
|
|
||||||
with patch("swiftonfile.swift.common.middleware."
|
with patch("swiftonhpss.swift.common.middleware."
|
||||||
"check_constraints.get_container_info",
|
"check_constraints.get_container_info",
|
||||||
self.container1_info_mock):
|
self.container1_info_mock):
|
||||||
resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
|
resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
|
||||||
@ -180,7 +180,7 @@ class TestConstraintsMiddleware(unittest.TestCase):
|
|||||||
longname = 'o' * 222
|
longname = 'o' * 222
|
||||||
path = '/V1.0/a/c2/' + longname
|
path = '/V1.0/a/c2/' + longname
|
||||||
|
|
||||||
with patch("swiftonfile.swift.common.middleware.check_constraints."
|
with patch("swiftonhpss.swift.common.middleware.check_constraints."
|
||||||
"get_container_info", self.container1_info_mock):
|
"get_container_info", self.container1_info_mock):
|
||||||
resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
|
resp = Request.blank(path, environ={'REQUEST_METHOD': 'PUT'}
|
||||||
).get_response(self.test_check)
|
).get_response(self.test_check)
|

@@ -15,7 +15,7 @@

 import unittest
 from mock import patch, Mock
-from swiftonfile.swift.common import constraints as cnt
+from swiftonhpss.swift.common import constraints as cnt


 def mock_check_object_creation(*args, **kwargs):

@@ -22,8 +22,8 @@ from nose import SkipTest
 from mock import patch, Mock
 from time import sleep
 from tempfile import mkdtemp, mkstemp
-from swiftonfile.swift.common import fs_utils as fs
-from swiftonfile.swift.common.exceptions import SwiftOnFileSystemOSError
+from swiftonhpss.swift.common import fs_utils as fs
+from swiftonhpss.swift.common.exceptions import SwiftOnFileSystemOSError
 from swift.common.exceptions import DiskFileNoSpace



@@ -27,10 +27,10 @@ import shutil
 import cPickle as pickle
 from collections import defaultdict
 from mock import patch, Mock
-from swiftonfile.swift.common import utils
-from swiftonfile.swift.common.utils import deserialize_metadata, \
+from swiftonhpss.swift.common import utils
+from swiftonhpss.swift.common.utils import deserialize_metadata, \
     serialize_metadata, PICKLE_PROTOCOL
-from swiftonfile.swift.common.exceptions import SwiftOnFileSystemOSError, \
+from swiftonhpss.swift.common.exceptions import SwiftOnFileSystemOSError, \
     SwiftOnFileSystemIOError
 from swift.common.exceptions import DiskFileNoSpace

@@ -352,7 +352,7 @@ class TestUtils(unittest.TestCase):
         pickled_md = pickle.dumps(orig_md, PICKLE_PROTOCOL)
         _m_pickle_loads = Mock(return_value={})
         utils.read_pickled_metadata = True
-        with patch('swiftonfile.swift.common.utils.pickle.loads',
+        with patch('swiftonhpss.swift.common.utils.pickle.loads',
                    _m_pickle_loads):
             # pickled
             md = utils.deserialize_metadata(pickled_md)
@@ -375,7 +375,7 @@ class TestUtils(unittest.TestCase):

     def test_deserialize_metadata_json(self):
         _m_json_loads = Mock(return_value={})
-        with patch('swiftonfile.swift.common.utils.json.loads',
+        with patch('swiftonhpss.swift.common.utils.json.loads',
                    _m_json_loads):
             utils.deserialize_metadata("not_json")
             self.assertFalse(_m_json_loads.called)
@@ -538,7 +538,7 @@ class TestUtils(unittest.TestCase):
         try:
             fpp = os.path.join(td, 'pp')
             # FIXME: Remove this patch when coverage.py can handle eventlet
-            with patch("swiftonfile.swift.common.fs_utils.do_fsync",
+            with patch("swiftonhpss.swift.common.fs_utils.do_fsync",
                        _mock_os_fsync):
                 utils.write_pickle('pickled peppers', fpp)
             with open(fpp, "rb") as f:
@@ -555,7 +555,7 @@ class TestUtils(unittest.TestCase):
             fpp = os.path.join(td, 'pp')
             # Also test an explicity pickle protocol
             # FIXME: Remove this patch when coverage.py can handle eventlet
-            with patch("swiftonfile.swift.common.fs_utils.do_fsync",
+            with patch("swiftonhpss.swift.common.fs_utils.do_fsync",
                        _mock_os_fsync):
                 utils.write_pickle('pickled peppers', fpp, tmp=tf.name,
                                    pickle_protocol=2)

@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-""" Tests for swiftonfile.swift.obj.diskfile """
+""" Tests for swiftonhpss.swift.obj.diskfile """

 import os
 import stat
@@ -27,18 +27,18 @@ from mock import Mock, patch
 from hashlib import md5
 from copy import deepcopy
 from contextlib import nested
-from swiftonfile.swift.common.exceptions import AlreadyExistsAsDir, \
+from swiftonhpss.swift.common.exceptions import AlreadyExistsAsDir, \
     AlreadyExistsAsFile
 from swift.common.exceptions import DiskFileNoSpace, DiskFileNotOpen, \
     DiskFileNotExist, DiskFileExpired
 from swift.common.utils import ThreadPool

-from swiftonfile.swift.common.exceptions import SwiftOnFileSystemOSError
-import swiftonfile.swift.common.utils
-from swiftonfile.swift.common.utils import normalize_timestamp
-import swiftonfile.swift.obj.diskfile
-from swiftonfile.swift.obj.diskfile import DiskFileWriter, DiskFileManager
-from swiftonfile.swift.common.utils import DEFAULT_UID, DEFAULT_GID, \
+from swiftonhpss.swift.common.exceptions import SwiftOnFileSystemOSError
+import swiftonhpss.swift.common.utils
+from swiftonhpss.swift.common.utils import normalize_timestamp
+import swiftonhpss.swift.obj.diskfile
+from swiftonhpss.swift.obj.diskfile import DiskFileWriter, DiskFileManager
+from swiftonhpss.swift.common.utils import DEFAULT_UID, DEFAULT_GID, \
     X_OBJECT_TYPE, DIR_OBJECT

 from test.unit.common.test_utils import _initxattr, _destroyxattr
@@ -82,7 +82,7 @@ class MockException(Exception):


 def _mock_rmobjdir(p):
-    raise MockException("swiftonfile.swift.obj.diskfile.rmobjdir() called")
+    raise MockException("swiftonhpss.swift.obj.diskfile.rmobjdir() called")


 def _mock_do_fsync(fd):
@@ -101,12 +101,12 @@ def _mock_renamer(a, b):
     raise MockRenamerCalled()

 class TestDiskFileWriter(unittest.TestCase):
-    """ Tests for swiftonfile.swift.obj.diskfile.DiskFileWriter """
+    """ Tests for swiftonhpss.swift.obj.diskfile.DiskFileWriter """

     def test_close(self):
         dw = DiskFileWriter(100, 'a', None)
         mock_close = Mock()
-        with patch("swiftonfile.swift.obj.diskfile.do_close", mock_close):
+        with patch("swiftonhpss.swift.obj.diskfile.do_close", mock_close):
             # It should call do_close
             self.assertEqual(100, dw._fd)
             dw.close()
@@ -120,7 +120,7 @@ class TestDiskFileWriter(unittest.TestCase):
         self.assertEqual(1, mock_close.call_count)

 class TestDiskFile(unittest.TestCase):
-    """ Tests for swiftonfile.swift.obj.diskfile """
+    """ Tests for swiftonhpss.swift.obj.diskfile """

     def setUp(self):
         self._orig_tpool_exc = tpool.execute
@@ -128,18 +128,18 @@ class TestDiskFile(unittest.TestCase):
         self.lg = FakeLogger()
         _initxattr()
         _mock_clear_metadata()
-        self._saved_df_wm = swiftonfile.swift.obj.diskfile.write_metadata
-        self._saved_df_rm = swiftonfile.swift.obj.diskfile.read_metadata
-        swiftonfile.swift.obj.diskfile.write_metadata = _mock_write_metadata
-        swiftonfile.swift.obj.diskfile.read_metadata = _mock_read_metadata
-        self._saved_ut_wm = swiftonfile.swift.common.utils.write_metadata
-        self._saved_ut_rm = swiftonfile.swift.common.utils.read_metadata
-        swiftonfile.swift.common.utils.write_metadata = _mock_write_metadata
-        swiftonfile.swift.common.utils.read_metadata = _mock_read_metadata
-        self._saved_do_fsync = swiftonfile.swift.obj.diskfile.do_fsync
-        swiftonfile.swift.obj.diskfile.do_fsync = _mock_do_fsync
-        self._saved_fallocate = swiftonfile.swift.obj.diskfile.fallocate
-        swiftonfile.swift.obj.diskfile.fallocate = _mock_fallocate
+        self._saved_df_wm = swiftonhpss.swift.obj.diskfile.write_metadata
+        self._saved_df_rm = swiftonhpss.swift.obj.diskfile.read_metadata
+        swiftonhpss.swift.obj.diskfile.write_metadata = _mock_write_metadata
+        swiftonhpss.swift.obj.diskfile.read_metadata = _mock_read_metadata
+        self._saved_ut_wm = swiftonhpss.swift.common.utils.write_metadata
+        self._saved_ut_rm = swiftonhpss.swift.common.utils.read_metadata
+        swiftonhpss.swift.common.utils.write_metadata = _mock_write_metadata
+        swiftonhpss.swift.common.utils.read_metadata = _mock_read_metadata
+        self._saved_do_fsync = swiftonhpss.swift.obj.diskfile.do_fsync
+        swiftonhpss.swift.obj.diskfile.do_fsync = _mock_do_fsync
+        self._saved_fallocate = swiftonhpss.swift.obj.diskfile.fallocate
+        swiftonhpss.swift.obj.diskfile.fallocate = _mock_fallocate
         self.td = tempfile.mkdtemp()
         self.conf = dict(devices=self.td, mb_per_sync=2,
                          keep_cache_size=(1024 * 1024), mount_check=False)
@@ -149,12 +149,12 @@ class TestDiskFile(unittest.TestCase):
         tpool.execute = self._orig_tpool_exc
         self.lg = None
         _destroyxattr()
-        swiftonfile.swift.obj.diskfile.write_metadata = self._saved_df_wm
-        swiftonfile.swift.obj.diskfile.read_metadata = self._saved_df_rm
-        swiftonfile.swift.common.utils.write_metadata = self._saved_ut_wm
-        swiftonfile.swift.common.utils.read_metadata = self._saved_ut_rm
-        swiftonfile.swift.obj.diskfile.do_fsync = self._saved_do_fsync
-        swiftonfile.swift.obj.diskfile.fallocate = self._saved_fallocate
+        swiftonhpss.swift.obj.diskfile.write_metadata = self._saved_df_wm
+        swiftonhpss.swift.obj.diskfile.read_metadata = self._saved_df_rm
+        swiftonhpss.swift.common.utils.write_metadata = self._saved_ut_wm
+        swiftonhpss.swift.common.utils.read_metadata = self._saved_ut_rm
+        swiftonhpss.swift.obj.diskfile.do_fsync = self._saved_do_fsync
+        swiftonhpss.swift.obj.diskfile.fallocate = self._saved_fallocate
         shutil.rmtree(self.td)

     def _get_diskfile(self, d, p, a, c, o, **kwargs):
@@ -214,7 +214,7 @@ class TestDiskFile(unittest.TestCase):
     def test_open_and_close(self):
         mock_close = Mock()

-        with mock.patch("swiftonfile.swift.obj.diskfile.do_close", mock_close):
+        with mock.patch("swiftonhpss.swift.obj.diskfile.do_close", mock_close):
             gdf = self._create_and_get_diskfile("vol0", "p57", "ufo47",
                                                 "bar", "z")
             with gdf.open():
@@ -314,7 +314,7 @@ class TestDiskFile(unittest.TestCase):
             closed[0] = True
             os.close(fd[0])

-        with mock.patch("swiftonfile.swift.obj.diskfile.do_close", mock_close):
+        with mock.patch("swiftonhpss.swift.obj.diskfile.do_close", mock_close):
             gdf = self._create_and_get_diskfile("vol0", "p57", "ufo47", "bar", "z")
             with gdf.open():
                 assert gdf._fd is not None
@@ -367,7 +367,7 @@ class TestDiskFile(unittest.TestCase):
             closed[0] = True
             os.close(fd[0])

-        with mock.patch("swiftonfile.swift.obj.diskfile.do_close", mock_close):
+        with mock.patch("swiftonhpss.swift.obj.diskfile.do_close", mock_close):
             gdf = self._create_and_get_diskfile("vol0", "p57", "ufo47", "bar", "z", fsize=1024*1024*2)
             with gdf.open():
                 assert gdf._fd is not None
@@ -394,7 +394,7 @@ class TestDiskFile(unittest.TestCase):
         try:
             chunks = [ck for ck in reader]
             assert len(chunks) == 0, repr(chunks)
-            with mock.patch("swiftonfile.swift.obj.diskfile.do_close",
+            with mock.patch("swiftonhpss.swift.obj.diskfile.do_close",
                             our_do_close):
                 reader.close()
             assert not called[0]
@@ -443,11 +443,11 @@ class TestDiskFile(unittest.TestCase):
             assert u == DEFAULT_UID
             assert g == DEFAULT_GID

-        dc = swiftonfile.swift.obj.diskfile.do_chown
-        swiftonfile.swift.obj.diskfile.do_chown = _mock_do_chown
+        dc = swiftonhpss.swift.obj.diskfile.do_chown
+        swiftonhpss.swift.obj.diskfile.do_chown = _mock_do_chown
         self.assertRaises(
             AlreadyExistsAsFile, gdf._create_dir_object, the_dir)
-        swiftonfile.swift.obj.diskfile.do_chown = dc
+        swiftonhpss.swift.obj.diskfile.do_chown = dc
         self.assertFalse(os.path.isdir(the_dir))
         self.assertFalse(_mapit(the_dir) in _metadata)

@@ -465,11 +465,11 @@ class TestDiskFile(unittest.TestCase):
             assert u == DEFAULT_UID
             assert g == DEFAULT_GID

-        dc = swiftonfile.swift.obj.diskfile.do_chown
-        swiftonfile.swift.obj.diskfile.do_chown = _mock_do_chown
+        dc = swiftonhpss.swift.obj.diskfile.do_chown
+        swiftonhpss.swift.obj.diskfile.do_chown = _mock_do_chown
         self.assertRaises(
             AlreadyExistsAsFile, gdf._create_dir_object, the_dir)
-        swiftonfile.swift.obj.diskfile.do_chown = dc
+        swiftonhpss.swift.obj.diskfile.do_chown = dc
         self.assertFalse(os.path.isdir(the_dir))
         self.assertFalse(_mapit(the_dir) in _metadata)

@@ -631,7 +631,7 @@ class TestDiskFile(unittest.TestCase):
             'ETag': 'etag',
             'X-Timestamp': 'ts',
             'Content-Type': 'application/directory'}
-        with gdf.create() as dw:
+        with gdf.create(None, None) as dw:
             dw.put(newmd)
         assert gdf._data_file == the_dir
         for key, val in newmd.items():
@@ -651,7 +651,7 @@ class TestDiskFile(unittest.TestCase):
         # how this can happen normally.
         newmd['Content-Type'] = ''
         newmd['X-Object-Meta-test'] = '1234'
-        with gdf.create() as dw:
+        with gdf.create(None, None) as dw:
             try:
                 # FIXME: We should probably be able to detect in .create()
                 # when the target file name already exists as a directory to
@@ -690,7 +690,7 @@ class TestDiskFile(unittest.TestCase):
             'Content-Length': '5',
         }

-        with gdf.create() as dw:
+        with gdf.create(None, None) as dw:
             assert dw._tmppath is not None
             tmppath = dw._tmppath
             dw.write(body)
@@ -725,7 +725,7 @@ class TestDiskFile(unittest.TestCase):

         with mock.patch("os.open", mock_open):
             try:
-                with gdf.create() as dw:
+                with gdf.create(None, None) as dw:
                     assert dw._tmppath is not None
                     dw.write(body)
                     dw.put(metadata)
@@ -762,10 +762,10 @@ class TestDiskFile(unittest.TestCase):
         def mock_rename(*args, **kwargs):
             raise OSError(errno.ENOENT, os.strerror(errno.ENOENT))

-        with mock.patch("swiftonfile.swift.obj.diskfile.sleep", mock_sleep):
+        with mock.patch("swiftonhpss.swift.obj.diskfile.sleep", mock_sleep):
             with mock.patch("os.rename", mock_rename):
                 try:
-                    with gdf.create() as dw:
+                    with gdf.create(None, None) as dw:
                         assert dw._tmppath is not None
                         tmppath = dw._tmppath
                         dw.write(body)
@@ -797,7 +797,7 @@ class TestDiskFile(unittest.TestCase):
             'Content-Length': '5',
         }

-        with gdf.create() as dw:
+        with gdf.create(None, None) as dw:
             assert dw._tmppath is not None
             tmppath = dw._tmppath
             dw.write(body)
@@ -904,7 +904,7 @@ class TestDiskFile(unittest.TestCase):
         gdf = self._get_diskfile("vol0", "p57", "ufo47", "bar", "dir/z")
         saved_tmppath = ''
         saved_fd = None
-        with gdf.create() as dw:
+        with gdf.create(None, None) as dw:
             assert gdf._put_datadir == os.path.join(self.td, "vol0", "ufo47", "bar", "dir")
             assert os.path.isdir(gdf._put_datadir)
             saved_tmppath = dw._tmppath
@@ -927,7 +927,7 @@ class TestDiskFile(unittest.TestCase):
     def test_create_err_on_close(self):
         gdf = self._get_diskfile("vol0", "p57", "ufo47", "bar", "dir/z")
         saved_tmppath = ''
-        with gdf.create() as dw:
+        with gdf.create(None, None) as dw:
             assert gdf._put_datadir == os.path.join(self.td, "vol0", "ufo47", "bar", "dir")
             assert os.path.isdir(gdf._put_datadir)
             saved_tmppath = dw._tmppath
@@ -942,7 +942,7 @@ class TestDiskFile(unittest.TestCase):
     def test_create_err_on_unlink(self):
         gdf = self._get_diskfile("vol0", "p57", "ufo47", "bar", "dir/z")
         saved_tmppath = ''
-        with gdf.create() as dw:
+        with gdf.create(None, None) as dw:
             assert gdf._put_datadir == os.path.join(self.td, "vol0", "ufo47", "bar", "dir")
             assert os.path.isdir(gdf._put_datadir)
             saved_tmppath = dw._tmppath
@@ -968,8 +968,8 @@ class TestDiskFile(unittest.TestCase):
         }

         _mock_do_unlink = Mock()  # Shouldn't be called
-        with patch("swiftonfile.swift.obj.diskfile.do_unlink", _mock_do_unlink):
-            with gdf.create() as dw:
+        with patch("swiftonhpss.swift.obj.diskfile.do_unlink", _mock_do_unlink):
+            with gdf.create(None, None) as dw:
                 assert dw._tmppath is not None
                 tmppath = dw._tmppath
                 dw.write(body)
@@ -994,11 +994,11 @@ class TestDiskFile(unittest.TestCase):
         _m_log = Mock()

         with nested(
-                patch("swiftonfile.swift.obj.diskfile.do_open", _m_do_open),
-                patch("swiftonfile.swift.obj.diskfile.do_fstat", _m_do_fstat),
-                patch("swiftonfile.swift.obj.diskfile.read_metadata", _m_rmd),
-                patch("swiftonfile.swift.obj.diskfile.do_close", _m_do_close),
-                patch("swiftonfile.swift.obj.diskfile.logging.warn", _m_log)):
+                patch("swiftonhpss.swift.obj.diskfile.do_open", _m_do_open),
+                patch("swiftonhpss.swift.obj.diskfile.do_fstat", _m_do_fstat),
+                patch("swiftonhpss.swift.obj.diskfile.read_metadata", _m_rmd),
+                patch("swiftonhpss.swift.obj.diskfile.do_close", _m_do_close),
+                patch("swiftonhpss.swift.obj.diskfile.logging.warn", _m_log)):
             gdf = self._get_diskfile("vol0", "p57", "ufo47", "bar", "z")
             try:
                 with gdf.open():
@@ -1033,7 +1033,7 @@ class TestDiskFile(unittest.TestCase):

         _m_do_close = Mock()

-        with patch("swiftonfile.swift.obj.diskfile.do_close", _m_do_close):
+        with patch("swiftonhpss.swift.obj.diskfile.do_close", _m_do_close):
             gdf = self._get_diskfile("vol0", "p57", "ufo47", "bar", "z")
             try:
                 with gdf.open():

@@ -13,7 +13,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-""" Tests for swiftonfile.swift.obj.server subclass """
+""" Tests for swiftonhpss.swift.obj.server subclass """

 import os
 from tempfile import mkdtemp
@@ -22,15 +22,15 @@ from eventlet import tpool

 from swift.common.swob import Request

-from swiftonfile.swift.obj import server as object_server
-from swiftonfile.swift.obj import diskfile
+from swiftonhpss.swift.obj import server as object_server
+from swiftonhpss.swift.obj import diskfile

 import unittest
 from test.unit import debug_logger


 class TestObjectController(unittest.TestCase):
-    """Test swiftonfile.swift.obj.server.ObjectController"""
+    """Test swiftonhpss.swift.obj.server.ObjectController"""

     def setUp(self):
         self.tmpdir = mkdtemp()

@@ -13,19 +13,19 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-""" Tests for swiftonfile.swift """
+""" Tests for swiftonhpss.swift """

 import os
 import unittest
 import shutil
 import tempfile

-import swiftonfile.swift as sof
+import swiftonhpss.swift as sof


 class TestPkgInfo(unittest.TestCase):
     """
-    Tests for swiftonfile.swift PkgInfo class.
+    Tests for swiftonhpss.swift PkgInfo class.
     """

     def test_constructor(self):
4
tox.ini
4
tox.ini
@ -38,8 +38,8 @@ commands = bash ./.functests -q
|
|||||||
[testenv:pep8]
|
[testenv:pep8]
|
||||||
changedir = {toxinidir}
|
changedir = {toxinidir}
|
||||||
commands =
|
commands =
|
||||||
flake8 swiftonfile test setup.py
|
flake8 swiftonhpss test setup.py
|
||||||
flake8 --filename=swiftonfile* bin
|
flake8 --filename=swiftonhpss* bin
|
||||||
|
|
||||||
[testenv:venv]
|
[testenv:venv]
|
||||||
changedir = {toxinidir}
|
changedir = {toxinidir}
|
||||||
|
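Nearly every hunk in this commit is the same mechanical change: the package name `swiftonfile` becomes `swiftonhpss` in imports, docstrings, `mock.patch` targets, and tox/flake8 targets. A minimal sketch of that kind of tree-wide rename is below; it is illustrative only and not part of the commit, and the file-extension filter is an assumption about which files carry the old name.

```python
import os

# Old and new package names, as they appear throughout this commit.
OLD, NEW = "swiftonfile", "swiftonhpss"

def rename_in_tree(root):
    """Rewrite OLD -> NEW in text files under root (illustrative sketch)."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            # Assumed extension list; the real change also touched
            # scripts and packaging files.
            if not name.endswith((".py", ".ini", ".spec", ".sh", ".md")):
                continue
            path = os.path.join(dirpath, name)
            with open(path) as f:
                text = f.read()
            if OLD in text:
                with open(path, "w") as f:
                    f.write(text.replace(OLD, NEW))
```

Note that a plain string replace like this cannot add new arguments, which is why the `gdf.create()` to `gdf.create(None, None)` changes in the diskfile tests had to be made separately.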