...

Using their Grace login information in the form of username@hpc.uams.edu, a user may access the ECS management GUI via a web browser. There the user has limited rights as a Namespace Administrator to manage their personal Namespace (e.g. create or replace access keys, create or delete Buckets, set Bucket attributes, and set up NFS or HDFS shares on buckets). Note that this GUI is also where a user fetches their secret key for accessing S3 or sets their Swift password.
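
As a minimal sketch of how those S3 credentials can be used outside the GUI, any S3-compatible client works once the keys are in hand. The endpoint URL and bucket name below are hypothetical placeholders, not actual ROSS values, and we assume (as is typical for ECS) that the Object User name serves as the S3 access key, with the secret key fetched from the GUI:

    # Placeholders only: substitute the real ROSS endpoint, bucket, and your own keys
    aws configure set aws_access_key_id "<your object user name>"
    aws configure set aws_secret_access_key "<secret key fetched from the ECS GUI>"
    aws s3 ls --endpoint-url https://ross.example.uams.edu
    aws s3 cp results.tar.gz s3://mybucket/ --endpoint-url https://ross.example.uams.edu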

Departments, labs, or other entities may arrange to have shared Namespaces in the object store.  The department, lab, or other entity designates one or more persons to manage that Namespace.  These departmental Namespace Administrators access the ECS management GUI using their UAMS credentials in the form of "username@uams.edu" (i.e. not their HPC credentials).  Note that each UAMS user can be the Namespace Administrator of only one shared Namespace (not counting the user's personal Namespace tied to their HPC account on Grace).  There is a workaround for this rule, but it requires multiple IDs, one for each Namespace.  From the management GUI, the Namespace Administrators can either add ACLs to Buckets or Objects to allow users to access data in the Namespace, or create local Object Users (Object Users who are not tied to any particular domain) within the Namespace.

...

While s3curl is a commonly used tool, it does not offer great performance for accessing the object store.  Access to the object store is parallel in nature, and many other tools leverage that inherent parallelism to improve performance.  In our experience, the open-source rclone tool is one of the easiest to use, with its rsync-like syntax.  The rclone tool also allows mounting a bucket as if it were a directory-oriented file system, similar to an NFS or network share mount.  Early benchmarking indicates that rclone seems to have better performance than NFS-mounting a bucket directly, particularly for reads.
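
As a rough illustration of that rsync-like usage, the sketch below defines an rclone "remote" for an S3-compatible store in ~/.config/rclone/rclone.conf and then copies and mounts data through it.  The remote name, endpoint, bucket, and mount point are hypothetical placeholders, not actual ROSS values:

    [ross]
    type = s3
    provider = Other
    access_key_id = <your object user name>
    secret_access_key = <secret key from the ECS GUI>
    endpoint = https://ross.example.uams.edu

    # rsync-like copy into a bucket, using several parallel transfers
    rclone copy /scratch/mydata ross:mybucket/mydata --transfers 8 --progress

    # mount the bucket like a directory (FUSE), running in the background
    rclone mount ross:mybucket $HOME/ross-bucket --daemon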

An even faster tool, according to our early benchmarking, is ecs-sync.  The ecs-sync tool offers the best performance of any tool that we have tried thus far; however, it is a bit more difficult to use.  We have set it up on a DTN node to facilitate moving data between ROSS and Grace.  Similar to the HPC, it uses a "job" concept, where one submits a data transfer job, which then runs in the background.  The ecs-sync tool has facilities for monitoring the performance of a job, allowing a user to change the number of parallel threads to tune that performance and to gauge progress.

There are many other options for accessing the object store, including both free and paid software.  UAMS IT has a limited number of licenses for S3 Browser, a GUI-based Windows tool for accessing ROSS.  We do not endorse any particular tool; users may pick the tool that they feel most comfortable with.

An Object User can also set up a Bucket with File Access enabled at Bucket creation time.  Then, using the ECS management GUI, the Namespace Administrator for the Namespace in which the Bucket resides can set up, for example, an NFS share on that Bucket along with user ID mapping.  Users can then mount and access the bucket using the NFS protocol, similar to any other network file share.  Enabling a bucket for file access does appear to add a small amount of overhead to that bucket, and accessing a bucket via NFS is slower than accessing it using the object protocols (e.g. S3 or Swift).  If an Object User needs file access, they might consider rclone mounting instead of NFS mounting, as it may be faster.
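
For orientation, an NFS mount of a file-access-enabled bucket looks like any other NFS mount.  The sketch below uses a hypothetical NFS endpoint and export path; the real values come from the NFS export configured in the ECS management GUI:

    # Placeholder host and export path; ECS serves file-enabled buckets over NFSv3
    sudo mkdir -p /mnt/rossbucket
    sudo mount -t nfs -o vers=3 ross-nfs.example.uams.edu:/mynamespace/mybucket /mnt/rossbucket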

If a user desires high-speed NFS storage, UAMS IT offers a Research NAS at cost that the researcher may use.  Please contact UAMS IT directly if you would like Research NAS storage.

...