QStar - Technical Specification
Technical Description
QStar Technologies
QStar Technologies, headquartered in the USA, is a market leader in Software Defined Storage (SDS) solutions for enterprise data protection and archiving.
The SDS solutions developed by QStar are sold through a worldwide partner channel (VARs and system integrators).
QStar Technologies has developed and implemented a significant number of projects in the field of
data protection and archiving based on tape technology.
Among the many projects carried out, two are worthy of note, both for their application area and for their total installed capacity:
- The Dutch Police, which redesigned its data protection data center around QStar technology for the secure long-term storage (30 years) of sensitive data, with a capacity of 300 Petabytes;
- Cambridge University, which uses QStar technology in the context of High Performance
Computing through an archive based on tape library technology with a total capacity of 35
Petabytes.
QStar Technologies is present in Italy with two offices, one in Rome and the other in Milan; the latter also provides technical support for the EMEA market.
Archive Manager
QStar Archive Storage Manager (ASM) manages a range of storage technologies such as Disk
Array, Object Storage, Tape Libraries, Optical Disk Libraries, WORM and Cloud (private and hybrid)
to form an efficient, safe and cost-effective Active Archive environment by virtualizing differing
storage technologies behind a file system. Users see ordinary file shares and can easily search, find
and retrieve data directly from the archive.
QStar Archive Storage Manager (ASM) creates an Active Archive environment as a standard
NAS-based file system using NFS and SMB protocols or S3-based cloud APIs.
Disk Storage, RAID, Tape Libraries (LTFS), Object Storage, Cloud Storage (Public, Private or Hybrid), WORM storage and Optical Libraries are managed transparently across the QStar Active Archive as a single mount-point file system or Windows folder.
This flexible and expandable data repository is accessed natively by Unix/Linux or Windows and is
handled by all existing applications without changing anything, because the pool of storage
resources works just like a standard NAS device. Moreover, the user’s data access experience
remains exactly the same.
Network Migrator
QStar Network Migrator (QNM) is a policy-based tiered storage and data lifecycle manager.
QNM software uses advanced policy management to monitor and automatically migrate, copy or
move less frequently used files from primary storage to tiered storage or to a central archive or
cloud.
The QStar Network Migrator solution automatically migrates static files using a combination of their attributes, such as creation, access or modification date and file name patterns.
The file name remains on the source system, where it was generated, while the file's content is migrated transparently to the secure archive.
By migrating static or less frequently used files to lower cost storage such as Tape Libraries or
Cloud, businesses can optimize the use of primary storage, reducing the need and costs associated
with purchasing more. In addition, when data is managed properly and only the most recent or
frequently changing files are included in the backup process, the backup window will be reduced.
QStar Network Migrator software can be easily installed on a Windows or Linux server. Agents are
available for each server managing data, whether Windows, UNIX, Linux or Mac. QNM also
supports a variety of API sets to integrate with “closed” file systems such as NetApp ONTAP,
Hitachi Vantara HNAS (formerly BlueArc), and solutions based on GPFS, Lustre and HyperFS
(BWStor) performance file systems.
A combination of file and file system attributes can be used to control the movement of data
including: file creation, access or modification date, file extensions, regular expression searches,
and high-water marks. Once defined, data is migrated to the designated storage device and when
archive retention dates have been met, files are released for managed or automatic deletion at
the end of their lifecycle. This intelligent management of data helps organizations meet internal
policies for data governance and regulatory requirements.
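The attribute-based policy matching described above can be sketched as a simple filter; the field names and default values below are illustrative assumptions, not QNM's actual policy syntax.

```python
import fnmatch
import time

# Illustrative sketch of attribute-based migration policy matching;
# the policy fields ("min_age_days", "extensions") are hypothetical.
def matches_policy(name, mtime, atime, policy, now=None):
    """Return True when a file qualifies for migration to tiered storage."""
    now = time.time() if now is None else now
    # Age is measured from the most recent of access/modification time.
    age_days = (now - max(mtime, atime)) / 86400
    if age_days < policy.get("min_age_days", 0):
        return False
    patterns = policy.get("extensions")
    if patterns and not any(fnmatch.fnmatch(name, p) for p in patterns):
        return False
    return True

policy = {"min_age_days": 90, "extensions": ["*.tif", "*.mp4"]}
old = time.time() - 200 * 86400
print(matches_policy("scan001.tif", old, old, policy))  # True: old enough, matches pattern
print(matches_policy("report.docx", old, old, policy))  # False: extension not listed
```

A real policy engine would combine many more criteria (regular expressions, size, high-water marks), but the shape of the decision is the same.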
Regardless of the physical location of the files, they can remain fully accessible across the network
from their original local file systems. Retrieving the file is as simple as it was before
migration. QStar Network Migrator software can be used on its own or in conjunction with other
QStar products, such as QStar Archive Storage Manager, to store and manage archived data using
Tape Libraries, WORM, Optical, Object Storage or Public, Private and Hybrid Cloud.
API Interface
QStar operates seamlessly with all major UNIX and Windows platforms using the standard file system APIs (POSIX for Linux and NTFS semantics for Windows), allowing any application to run without porting. In addition, a full Application Programming Interface (API) is provided to manage all QStar ASM-specific features. A separate API Interface manual is available upon request from QStar.
Privileged Users
Normally only the System Administrator can run QStar commands, because of their administrative capabilities. If required, QStar can maintain a special user list that allows non-root users to run QStar commands and the Administration Interface; a token verification and authorization dialog then gates the start of the GUI.
Remote Administration
The QStar Administration Interface can be used from any Windows Server 2008/2012/2016, Windows 8/10, Linux CentOS or Debian 7.x client machine to remotely administer, configure and manage any other server (Windows or UNIX) that has the QStar Software installed and accessible on the network. The System Administrator simply copies Admin.jar and its associated files from c:\qstar\bin to the desktop of the client machine to be used as the remote manager. Executing Admin.jar opens the GUI; there, select the Connect option in the toolbar and enter the IP address of the computer to administer remotely. The application will create a pop-up window
where the System Administrator must enter user credentials on the remote node and the QStar
Administration Interface will be opened. The majority of the QStar Software functionality is available from a
remote host using the GUI. The QStar Software also provides powerful Command Line Interface (CLI) utilities that may be used to manage the QStar Software manually or in scripts. All of the command line utilities must be used with the “-H” flag to define the host to which the command is directed and on which it is executed. For remote administration the software distribution files need to be installed on the local computer (no license is required).
VL Scheduler
The QStar ASM Software contains an event scheduler for scheduling certain processes. Scheduling of events
allows the Software to be used to its full potential, notifying the System Administrator of low availability of
media and delaying system-intensive processes to run at off-peak times. The VL Scheduler can be used to initiate archiving from caches independently, on a per-Integral-Volume basis, and to run batch or single media erases as well as Copy Media requests. It can also be used to schedule more time-consuming tasks outside peak hours, such as volume copies and Data Compaction on Integral Volume sets.
To manage the device(s), the QStar ASM Software includes components such as the QSCSI subsystem and the JB driver (Library Manager), described below.
The QSCSI subsystem is not normally used by end users directly. The commands provided (or the GUI troubleshooting page) are used to troubleshoot device behavior directly at the SCSI level.
QStar ASM supports the Oracle ACSLS protocol to manage large Oracle tape libraries, such as the SL3000 and SL8500. The media changer operations are initiated through a TCP/IP interface, while data is written over a Fibre Channel network.
Library Manager
The JB driver, through the “jb” commands, manages the storage libraries historically called “jukeboxes”. The JB module performs various operations with the library and its elements (slots, drives, carriers, and import/export elements).
The commands that can move media between elements are as follows:
An application can control the online/offline status of the elements as well as the library time control
parameters through API calls.
The Library Manager supports mixed-generation LTO tape drives and media in the same library; it distinguishes between LTO-1 through LTO-8 and fully manages drives and media, ensuring that only compatible media are loaded into each drive. Although the QStar ASM software does not depend on media barcodes, a correct barcode indicating the media type is used to make media-type decisions. If there is no barcode, the QStar ASM software will determine the media type only after the media is loaded into a drive (for example, during the first “refresh” operation).
View Store
Library management software provides a completely application-transparent interface to libraries, scheduling the insertion of particular media into a drive based on demand for that volume.
Library Statistics
QStar ASM Software provides statistics for the elements within the library. These statistics allow the System
Administrator to monitor for potential problems with the media and drives within the library. This allows
problems to be rectified before they become serious. There are also statistics for the number of media loads per slot/surface and per drive. If the library is full and media needs to be taken offline, these statistics will identify the least frequently accessed media. Additionally, the statistics show the number of recovered errors
per surface/drive/carrier, the number of occasions when a slot/drive has been marked as “bad” and the
number of primary and secondary defect blocks on the Optical (MO/UDO/Blu-ray) media. The Statistics view
pane or the jbstatistics command line allows library statistic information to be printed or cleared.
1. Pioneer Blu-ray disc verification. This special media type is used to verify media quality and provide a graphical representation of the media status. There are three error-rate zones, which allow media to be qualified as “good”, “mediocre” or “bad”.
2. UDO discs. UDO discs provide the capability to retrieve replaced-block counters during media formatting and during media usage. Each disc surface provides a limited number of replacement blocks, and once the number of available replacement blocks drops below certain thresholds, the media is qualified as good, mediocre or bad.
3. LTO tape media. LTO media provides several capabilities to retrieve media health information from the media itself. Using those counters, the QStar ASM code qualifies media as good, bad or mediocre.
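The good/mediocre/bad qualification described above can be sketched as a threshold check on the remaining replacement (spare) blocks; the percentage thresholds below are illustrative, not QStar's actual values.

```python
# Illustrative media-health classification by remaining spare blocks;
# the 50%/20% thresholds are assumptions, not QStar's real cut-offs.
def classify_media(spare_blocks_left, spare_blocks_total,
                   mediocre_pct=50, bad_pct=20):
    """Qualify media as good/mediocre/bad from remaining spare blocks."""
    pct = 100.0 * spare_blocks_left / spare_blocks_total
    if pct <= bad_pct:
        return "bad"
    if pct <= mediocre_pct:
        return "mediocre"
    return "good"

print(classify_media(900, 1000))  # good: 90% of spares remain
print(classify_media(400, 1000))  # mediocre: 40% remain
print(classify_media(100, 1000))  # bad: 10% remain
```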
MPIO Support
The ASM MPIO feature provides:
- Initial configuration of the devices and automatic path discovery using device serial numbers
- Selection of the main path to the device using priority, locality and performance criteria
- Automatic path fail-over in case of main path failure
- Periodic path health verification
- A set of management interfaces to administer and monitor MPIO devices:
  - Get MPIO path status
  - Set a device path online or offline
  - Define a new path to the device
  - Reassign the main path to an alternative one
  - Remove a path to the device
  - Set path priorities in order to select the optimal path to the device
The ASM MPIO management is implemented in a portable way and does not depend on native OS MPIO support. It is available from the GUI or the CLI (qmpio command).
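The path-selection and fail-over behavior above can be sketched as follows; the path descriptors and numeric priority scheme are illustrative, not ASM's internal representation.

```python
# Minimal sketch of main-path selection with fail-over; path records
# and the "lower number = higher priority" convention are assumptions.
def pick_path(paths):
    """Return the highest-priority (lowest number) online path."""
    online = [p for p in paths if p["online"]]
    if not online:
        raise IOError("no available path to device")
    return min(online, key=lambda p: p["priority"])

paths = [
    {"name": "sg3", "priority": 0, "online": False},  # main path has failed
    {"name": "sg7", "priority": 1, "online": True},   # alternative path
]
print(pick_path(paths)["name"])  # sg7: automatic fail-over to the alternative
```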
The scratch media feature:
- Allows quick storage library readiness for operations when completely new media is inserted
- Improves large storage library management possibilities
By default, QStar ASM Software allows media to be added (initialized) to an Integral Volume set only if it is erased or blank. If media is marked as scratch, the QStar ASM Software will consider it available for initialization, but will first check whether the media is empty or contains one of QStar's supported file systems; if it is empty, the media is added to the Integral Volume set. QStar ASM provides an option in the Media tab of the GUI, and the vlscratch command line, to declare media as scratch. Such media will be displayed in the vllssdev command output or the Media tab as “scratch”.
The lowest level of data control is implemented in the tape drives and is based on checksum verification of tape block data. The QStar ASM software processes errors of this type and attempts to repeat the operation on the block, clean the drive, or read the media on a different drive.
The user may configure Logical Block Protection on the tape drive to control data transport between computer and tape drive buffers (see the LBP feature description).
The user may configure data verification using the SCSI WRITE AND VERIFY CDB. This may be possible on some disk drives but may slow down operations.
The TDO media and Cloud formats support checksum calculation (many algorithms are supported) on the TDO object. During reads the checksum is verified and operations are retried up to a certain number of times. If the checksum still fails, the media copy may be substituted or the object retrieved from the mirror media.
The user may configure data digest calculation on a per-file basis (see the Digest Support section).
All of these tools and methods can discover media errors. To guarantee that data remains retrievable, some form of media duplication should be used, such as media copy, incremental media copy, media mirroring or file replication, described in the following chapters.
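The verify-retry-substitute sequence described above can be sketched conceptually; the function names, retry count and choice of SHA-256 are assumptions for illustration, not QStar's actual mechanism.

```python
import hashlib

# Conceptual sketch of checksum-verified reads with retry and mirror
# fall-back; names, retry count and the SHA-256 choice are illustrative.
def read_verified(read_fn, expected_sha256, retries=3, mirror_fn=None):
    """Retry a read until its digest matches; fall back to the mirror copy."""
    for source in (s for s in (read_fn, mirror_fn) if s is not None):
        for _ in range(retries):
            data = source()
            if hashlib.sha256(data).hexdigest() == expected_sha256:
                return data
    raise IOError("checksum mismatch on all copies")

payload = b"archived object"
digest = hashlib.sha256(payload).hexdigest()
primary = lambda: b"corrupted read"   # primary media keeps failing
mirror = lambda: payload              # mirror copy is intact
print(read_verified(primary, digest, mirror_fn=mirror) == payload)  # True
```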
Barcode Support
QStar supports most major manufacturers' implementations of barcodes for media management. This allows improved handling of offline media and disaster recovery. In large-scale installations, media can be scanned for barcodes in seconds for fast and efficient media tracking. Media can be tracked by a user-defined label, or by its barcode information, whether inside or outside the library.
For requests that require offline archival media access, the operator is prompted to retrieve the storage
media from its storage location and insert it into the QStar configured storage library.
Data Compaction
The Data Compaction utility is available for use with the SDF/TDO/LTFS file systems, and is used for migrating
live data from one piece of media to the current write surface of the Integral Volume set. The Data
Compaction feature lets the operator fully reclaim media blocks after modifying or removing files on
rewritable Optical media or Tape. This feature is managed under the control of the VL database. This feature
is available on all Integral Volume sets, or just selected ones. Once a piece of media has been compacted and erased, the System Administrator can either add the erased media back to the same Integral Volume set or remove it from the Integral Volume set entirely.
Copy Media
The QStar ASM Software can manually or automatically execute a duplication process of the media that has
been completely written. With a correctly configured Integral Volume set, the Automatic Copy Media command will run every time the Integral Volume set reaches the point of dynamically allocating another piece of media. With manual Copy Media, the source and destination media may be selected and copied at the System Administrator's convenience. The copied media is an exact
duplicate of the original media in the Integral Volume set. If a media is damaged, the copied media can be
used to replace the damaged media.
Using QStar's caching technology, the most recently used data is available on the magnetic disk (or other storage type), providing fast access to recent data for users. Less recently used data is stored on archival
media, via the Volume Librarian (VL) and is automatically moved to the magnetic disk if a user accesses it.
This is a function of the Migration Manager. The moving of data between the cache and the archival media
is transparent to end-users. QStar Software’s automatic storage management provides the benefit of
virtually unlimited storage capacity without sacrificing access time to critical data.
Migration Manager
Migration is the movement of data between the cache partition and the archival media. The Migration
Manager was designed to provide a view of a collection of diverse types of storage media. This includes
magnetic disks, RAID, Optical, CD/DVD, Blu-ray, RDX, Tape drives and libraries. The job of the Migration
Manager is to combine all of these technologies into a Virtual File System called an Integral Volume set. An
Integral Volume set looks and feels to the user like a standard file system on a magnetic disk. This means that
all standard applications, including network-based applications, can work with the Integral Volume set
without modification and in the same manner as they would work with a normal magnetic disk.
In the Magnetic Cache File System (MCFS) module, data from different user files is stored in different data files (extents) in the cache in order to increase the overall performance of the cache.
The extent size depends on the page size, for example for 256k pages the extent size is 1008 MiB, for 1024k
pages the extent size is 3.94 GiB. Each extent has its own page file. In the current MCFS, even sequential writes to a user file do not necessarily produce writes to sequential pages of the cfs_page file. The MCFS cache design essentially relies on native platform file system features to deal with fragmentation and, at the same time, allows the native file system tools to be used to defragment a cache directory.
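The two extent sizes quoted above are both consistent with a fixed count of 4032 pages per extent; that constant is inferred here from those figures rather than taken from QStar documentation.

```python
# 256 KiB pages -> 1008 MiB extents and 1024 KiB pages -> 3.94 GiB
# extents both imply 4032 pages per extent (an inferred constant).
PAGES_PER_EXTENT = 4032

def extent_size_bytes(page_size_kib):
    """Extent size for a given cache page size, in bytes."""
    return PAGES_PER_EXTENT * page_size_kib * 1024

print(extent_size_bytes(256) / 2**20)             # 1008.0 (MiB)
print(round(extent_size_bytes(1024) / 2**30, 2))  # 3.94 (GiB)
```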
File Migration
File Migration is defined as the movement of data between the disk cache and the archive storage media. This includes archiving files to the secondary storage media and replicating files back to the magnetic cache to service read requests.
When archiving data from the cache into an Integral Volume set, several archiving policies can be utilized to maximize the efficiency of the archiving process. Automatic data migration in the Integral Volume set (demand archiving) forces an archiving cycle when a pre-arranged watermark, the High Primary Capacity, is reached. Archiving can also be started by the System Administrator at any time, or scheduled at a timed interval using the VL Scheduler.
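The High Primary Capacity trigger can be sketched as a simple watermark check; the 80% threshold below is illustrative, not a QStar default.

```python
# Sketch of the watermark check that triggers demand archiving;
# the 80% threshold is an illustrative assumption.
def should_archive(cache_used_bytes, cache_size_bytes, high_watermark_pct=80):
    """True once cache usage reaches the configured high watermark."""
    return 100.0 * cache_used_bytes / cache_size_bytes >= high_watermark_pct

print(should_archive(850 * 2**20, 1000 * 2**20))  # True: 85% >= 80%
print(should_archive(500 * 2**20, 1000 * 2**20))  # False: 50% < 80%
```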
Retention Period
The Retention Period feature can be used alone or in conjunction with the Grace Period feature. Both Grace
Period and Retention Period start from the last modification time. The Retention period specifies that a file
can be removed only when the Retention period has expired. The benefit of this feature is the ability to lock
a file to read only status for the time the file is required to remain available within corporate guidelines. This
feature goes further than the standard UNIX, or Windows read only flag, as even the System Administrator
cannot remove the read only flag for files under Retention Period management.
Enabling this feature provides a WORM file system, even with rewritable media, which is ideal for corporate data archiving compliance requirements. Once a file has passed the end of its Retention Period, it may be modified or deleted from the file system. If both the Retention and Grace Periods are specified, a file can only be modified or deleted either before the Grace Period begins or after the Retention Period expires.
This feature is configurable on a per-Integral-Volume-set basis and can be set in seconds, minutes, hours, days or years. Optionally, file-level retention is supported using standard file system operations (by setting the access time in the future and declaring the file read-only). This method is accepted in the industry, although it is not part of standard file system features.
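One reading of the combined rule can be sketched as follows: the file stays writable inside a grace window after its last modification, then locks until the retention period (also counted from the last modification) expires. The interpretation and all values are illustrative.

```python
DAY = 86400

# One interpretation of combined Grace + Retention, as a sketch:
# writable during the grace window, locked until retention expires.
def is_mutable(mtime, now, grace_s, retention_s):
    """True when the file may still be modified or deleted."""
    return now < mtime + grace_s or now >= mtime + retention_s

grace, retention = 7 * DAY, 365 * DAY
print(is_mutable(0, 1 * DAY, grace, retention))    # True: grace window open
print(is_mutable(0, 30 * DAY, grace, retention))   # False: locked, read-only
print(is_mutable(0, 400 * DAY, grace, retention))  # True: retention expired
```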
Throttling Options
There are two data streams that can be distinguished in the Integral Volume: ingesting data into the cache
from the user application and archiving data from the cache to the backend archival storage. Usually those
two data streams have different performance rates. If archiving is slower than ingesting then the cache will
eventually fill up with Primary data and will be unable to accept new data from the user application until
more data is written to the backend archival storage. In some cases this may cause the application to time
out.
To manage the rate at which the user application sends data to the cache, a throttling mechanism is embedded in the cache manager. It slows the ingest rate by delaying the processing of requests from the user application for several milliseconds, giving the archiving process more time. This balancing keeps the cache in a state where room for new data is almost always available. By default, throttling is enabled.
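The braking behavior can be sketched as a delay that grows as free cache space shrinks; the thresholds and the linear ramp below are illustrative assumptions.

```python
# Sketch of ingest throttling: below a free-space threshold, each
# ingest request is delayed by a few milliseconds so the archiving
# stream can catch up. Thresholds and ramp shape are illustrative.
def throttle_delay_ms(cache_free_pct, start_pct=20.0, max_delay_ms=50.0):
    """Delay grows linearly from 0 at start_pct free to max at 0% free."""
    if cache_free_pct >= start_pct:
        return 0.0
    return max_delay_ms * (start_pct - cache_free_pct) / start_pct

print(throttle_delay_ms(50.0))  # 0.0  - plenty of free space, no braking
print(throttle_delay_ms(10.0))  # 25.0 - halfway into the danger zone
print(throttle_delay_ms(0.0))   # 50.0 - cache full, maximum braking
```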
During read operations the ASM Software performs a minimum read request equal to one page of the cache. The file system can be configured to read between one page and a maximum of 2 GB for each read request. With tape storage it is beneficial to read much more data than requested (essentially read-ahead), because tape positioning is quite a lengthy operation.
Do not Keep Full File (-f): File is processed in a page mode. An access to the file causes a single page
containing the referenced bytes to be read into the cache.
Keep Full File (+f): File is processed in a full-file mode. This mode induces a full file read-ahead.
When a user application tries to read bytes from a file, a page containing these bytes is replicated from the
covered file system into the cache. For a file without prefetching (Do Not Keep Full File) enabled that is all
that the read request will do. An attempt to read a file with prefetching enabled induces prefetching or read
ahead. Not only the page that contains the requested bytes is replicated into the cache but other pages are
replicated as well. If the covered file system resides on tape, prefetching (Keep Full File) is enabled by default.
Full Prefetch
Sets the prefetching mode to full: at the first read request for a file, the whole file is read from tape and replicated into the cache.
Pin to Cache also includes an Extensions option, which allows a comma-separated list of file name extensions to be specified whose files need to be kept in cache.
DIAGNOSTIC FEATURES
QStar ASM Software provides different diagnostic components that can help the System Administrator to
monitor events, warnings and errors within the system.
E-Mail Notification
The System Administrator can receive e-mail notifications about abnormal events in the QStar ASM Software.
Abnormal events include, for example, a drive/library failure, an unrecoverable write error, a request for an additional medium (no more space in the Integral Volume set), or a request for a medium that is currently offline.
QStar media format manager stores information about the data on the media, thus providing the template
for the data written to the archival media. All archival Integral Volume media sets are self-contained, with
file and directory information, data, and indexes on the same Optical, Blu-ray or Tape media. The volume format is optimized to ensure maximum performance and transportability between Optical, Blu-ray or Tape libraries and file servers.
Transportability
Media formats supported by the QStar ASM can be easily moved from one host system to another, regardless
of the manufacturer. Thus, QStar’s media format manager protects the company’s investment in current
hardware and allows access to critical data using other manufacturers’ products.
Flexibility
Media format describes the contents of a single archival media allowing it to be used as part of a logical
group, or Integral Volume media set. As part of an Integral Volume set, file and directory information is not
restricted to a single piece of media; it may span several blue-ray or tapes, giving contiguous space for large
files. This is considered as a multi volume file system.
Disaster Prevention
Making all archival media self-describing provides the means for disaster prevention and recovery. Any
magnetic disk cache in the storage hierarchy can be completely rebuilt from the archival media, thus
preventing catastrophic data loss.
This approach supports all types of read/write and WORM media and makes it possible to recover the file system to a particular point in time (see the mount-on-date feature).
If the file system database becomes very large, the intervals between backups may be increased. Alternatively, incremental database backups can be performed to minimize the space occupied by the backups themselves.
Media Formats
If the user specifies an industry-standard media format (such as LTFS or UDF), QStar ASM will write data to the media according to that format.
All QStar vendor-specific archival media are written using a self-describing format, with file and directory information, data, and indexes on the same optical platter, DVD disc, or tape. This provides recoverability and transportability of the media between storage libraries and file servers. This approach is implemented for Blu-ray, Tape and other media.
This file system can be used to manage data storage and retrieval on Tape media. LTFS is available on both
UNIX and Windows platforms. Files are stored contiguously from the beginning to the end of each piece of
media, with single-seek read and write access. QStar Software fully supports LTFS Version 2.2. Data tapes written in the LTFS format can be used independently of any external database or storage system, allowing direct access to file content data and file metadata. This format makes it possible to implement software
that presents a standard file system view of the data stored in the tape media. This file system view makes
accessing files stored on the LTFS formatted media similar to accessing files stored on other forms of storage
media such as disk or removable flash drives. The Linear Tape File System format is an open specification of
the layout of data-structures stored on a sequential-access media. These data-structures hold the file content
data and associated file metadata. Data media, such as LTO data tape, written using this format can be
exchanged between systems that understand the Linear Tape File System format. Software systems that
understand the format can provide users with a file system view of the media. Software systems may
alternatively understand the format only to the degree that allows the system to read data from the media,
or produce a tape that can be accepted by other systems that implement the Linear Tape File System format.
LTFS media created in stand-alone drives can also be imported into the QStar controlled tape library.
The QStar implementation of the LTFS file system supports the following four Interchange Levels:
- Direct-single-volume: A single-media file system for a standalone drive or a library, without the QStar Magnetic Cache File System (MCFS). This Interchange Level provides direct access to the LTFS tape.
- Single-volume: A single-media file system for a standalone drive. This mode uses the disk cache and may provide better performance if data from the media is read frequently.
- Automount: A single-media file system where each media is seen as a separate directory within a single volume. Automount mode uses the disk cache.
- Spanning: All media are automatically aggregated into one large file system. Files may be spanned across several media, which allows huge multivolume files (64+ TiB) to be written.
The LTFS file system format is best suited for large files and in cases when media interchange with other
systems is required.
ADVANCED FEATURES
Besides the standard ASM Software, the System Administrator can install the following Advanced Software Packages:
different storage technologies. The RPL migrator may replicate data to remote nodes and therefore provides basic disaster recovery. Files on the remote site may be served in read-only mode to increase overall performance.
1. Create a regular Integral Volume (for example TDO) on the backend-remote-host.
2. Assign a cache to that Integral Volume set. This cache will be used by the migrator for storing the migrator database, but will not be used by the cache manager.
3. Create a proxy Integral Volume set on the cache-host using the GUI or the vlcrset command (for example vlcrset -T proxy -h target_host_name -s target_set_name proxy_set_name). The target_host_name is the host name of the backend-remote-host and the target_set_name is the name of the Integral Volume set on the backend-remote-host. Both the -h and -s options must be present.
4. Assign a cache to the Proxy Integral Volume set. This cache will be used by the cache manager and will not be used by the migrator.
After these actions are done, the Proxy Integral Volume set is ready for use. A Proxy Integral Volume set can be mounted like any other Integral Volume set. After it is mounted, two software modules are working on the cache-host: the cache manager and the proxy migrator. On the backend-remote-host only the migrator is working, for the target Integral Volume set. The proxy migrator operates as an intermediary between the two hosts: it passes requests from the cache manager to the migrator on the backend-remote-host and passes replies from the target migrator back to the cache manager on the cache-host.
QStar S3 Service user management allows users' permissions to be set and checked for requested operations on QStar's S3 service and provides internal mechanisms for user management.
All user management data is stored in a local SQLite database, with passwords stored in encrypted form. All user management features can be exported and are available via separate command line tools, which send these commands to the QStar S3 service using the HTTPS protocol. The QStar S3 service uses the prepared users/groups database to check user permissions during S3 request execution.
The Data Digest algorithms available for the Integral Volume set are SHA1, SHA256 and SHA512; SHA512 is the most secure digest but requires more CPU power than SHA1.
Special commands are used to verify the integrity of files already archived. This is done by recalculating the digest and comparing it to the one stored in the QStar stream at the time of archiving. Please contact QStar Support personnel for more detailed information.
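The recalculation step can be sketched with a streaming digest; the helper below is an illustration of the concept, not QStar's actual verification command.

```python
import hashlib

# Sketch of digest verification for archived files: recompute the
# digest in streaming fashion and compare with the stored value.
def file_digest(path, algorithm="sha256", chunk=1 << 20):
    """Hash a file in 1 MiB chunks so large archives never fill memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify(path, stored_digest, algorithm="sha256"):
    """True when the recomputed digest matches the stored one."""
    return file_digest(path, algorithm) == stored_digest
```

Passing "sha1" or "sha512" as the algorithm selects the other digests named above, with the CPU-cost trade-off already described.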
QMS utility
The QMS utility is used when the archive file system serves as a temporary data vault. For example, video surveillance files may be relevant for one year, after which it is desirable to remove them in order to reuse the removable media.
For QMS to operate, the Integral Volume set needs to have a Retention time defined. QMS should be scheduled for periodic execution; it will go through the file system and find files whose retention time has expired. It removes the expired files and attempts to reuse the media (if the Integral Volume is media based).
The media will be erased and added back to the Integral Volume. It is also possible just to list the files that
have met retention so they can be checked if needed.
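A QMS-style sweep can be sketched as follows; the function and its dry-run mode are illustrative, with the dry run mirroring the "just list the files" option described above.

```python
import os
import time

# Sketch of a retention sweep: list (or delete) files whose retention,
# counted from last modification, has expired. Names are illustrative.
def expired_files(root, retention_s, now=None, dry_run=True):
    """Walk the tree; return expired paths, deleting them unless dry_run."""
    now = time.time() if now is None else now
    hits = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if now - os.path.getmtime(path) >= retention_s:
                hits.append(path)
                if not dry_run:
                    os.remove(path)
    return hits
```

Run with dry_run=True to produce a review list first, then with dry_run=False to actually reclaim space.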
TMT Utility
The Technology Migration Tool (TMT) utility is provided to optimize file migration from older tape media or disk generations to new ones. The TMT utility creates a list of the files on a particular medium and copies them to the Integral Volume cache. TMT sorts files by their extent location on the source media to maintain sequential tape movement, which achieves the highest possible performance when moving files from old tapes to new ones. The TMT utility may also be used to copy files in selected directories, again using its sorting capabilities.
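The extent-ordering idea can be shown with a minimal sketch. The file names and extent offsets below are invented for illustration; the real TMT reads extent maps from QStar metadata.

```python
# Files as they happen to be listed, with the byte offset of each file's
# first extent on the source tape (hypothetical values).
files_on_tape = [
    {"name": "scan_003.tif", "extent_start": 91_000},
    {"name": "scan_001.tif", "extent_start": 12_000},
    {"name": "scan_002.tif", "extent_start": 55_000},
]

def migration_order(files):
    """Sort files by their first extent on the source medium so the tape
    moves strictly forward instead of seeking back and forth."""
    return sorted(files, key=lambda f: f["extent_start"])

for f in migration_order(files_on_tape):
    print("copy to cache:", f["name"])
```

Copying in this order keeps the source tape streaming, which is why the text calls it the highest-possible-performance path for tape-to-tape migration.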
Cache Booster
The cache on the hard disk is used to accept new data into the Integral Volume and to feed data being migrated to the storage devices (tapes, clouds, etc.). Storage devices run at increasingly higher speeds: LTO-7/LTO-8, for example, provides up to 750 MB/sec, so to support several tape drives the disk should sustain 2-3 GB/sec to satisfy all data paths. Such disk (RAID) systems exist, but they may be quite expensive.
The cache booster feature allows a separate cache area to be created on very fast (though possibly capacity-limited) devices. For example, a cache booster may be implemented on a RAM disk, using computer memory as cache, or on fast SSD or flash memory disks.
The cache booster may be configured for write operations only (to support fast writes), for read operations only (to support intense read operations), or for both reads and writes. In any case the hard disk cache still needs to be configured, because it contains the cache and migrator databases. If the cache booster is not configured for read operations, read data is placed in the hard disk cache for repeated file reads.
Using the cache booster it is possible to provide high-performance pass-through writes to the archive devices while at the same time using the hard disk to keep replicated data.
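The three booster configurations and the HDD fallback described above amount to a small routing rule, sketched here. The mode names and function are hypothetical shorthand for the behavior the text describes, not QStar configuration syntax.

```python
def cache_tier(op, booster_mode):
    """Return which cache area serves an I/O operation.

    booster_mode: None, "write", "read", or "both" -- mirroring the cache
    booster configurations (disabled, write-only, read-only, read+write).
    """
    if booster_mode in ("write", "both") and op == "write":
        return "booster"      # fast pass-through writes on RAM disk / SSD
    if booster_mode in ("read", "both") and op == "read":
        return "booster"      # intense read workloads served from fast media
    return "hdd_cache"        # the HDD cache always exists: it holds the
                              # cache and migrator databases

print(cache_tier("write", "write"))  # booster
print(cache_tier("read", "write"))   # hdd_cache: repeated reads land on HDD
```

Note the asymmetry the text calls out: even with a write-only booster, read data still lands in the hard disk cache for repeated reads.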
The GAS cluster consists of up to 64 computer nodes, and each node can expose a global namespace to users. Files are created on the node (and storage device) where the data is received, but each node provides metadata information about its files to the other nodes. If a file is requested from a node that does not hold the file data, GAS transfers that data to the requesting node's cache for the user to access.
An optional feature (planned for Q1 2020) will provide the ability to replicate data from one node to others (up to 4 replicas), preserving access to files in case of a node (or communication) failure.
QStar Technologies, Inc. therefore developed the Global ArchiveSpace (GAS) architecture, in which system performance may be scaled by adding nodes. To manage storage library resources among nodes, an architecture is needed to manage slots and drives in a multinode system (a GAS domain).
Such management is defined in a shared library approach, where one node (possibly clustered for higher availability) is assigned to manage a physical library (loading and unloading media into drives, performing media import/export, etc.). The other nodes act as shared library clients. Each shared library client is assigned a certain number of library slots and drives and can use only those resources. The shared library architecture assumes a high-performance connectivity fabric, such as Fibre Channel (FC), to provide client access to the tape drives. (Q4 2019)
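The slot/drive partitioning among shared library clients can be pictured with a small sketch. Node names, slot counts, and drive names below are illustrative assumptions; the real assignment is part of GAS domain configuration.

```python
# One manager node owns the physical library; each client node is assigned
# a disjoint subset of its slots and drives (hypothetical numbers).
library = {"slots": list(range(1, 101)), "drives": ["drv0", "drv1", "drv2", "drv3"]}

assignments = {
    "node-a": {"slots": library["slots"][:50], "drives": library["drives"][:2]},
    "node-b": {"slots": library["slots"][50:], "drives": library["drives"][2:]},
}

def can_use(node, kind, resource):
    """A shared library client may only use the slots/drives assigned to it."""
    return resource in assignments.get(node, {}).get(kind, [])

print(can_use("node-a", "drives", "drv0"))  # True
print(can_use("node-a", "drives", "drv3"))  # False: drv3 belongs to node-b
```

The manager node performs the physical load/unload operations; the assignment table is what keeps two clients from contending for the same drive, while the FC fabric gives every client a data path to its drives.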
System Security
QStar ASM provides several access security levels. Security level 0 (to be used only in physically secure environments) uses unencrypted communication between nodes. Security level 1 uses the TLS 1.2 encryption standard for communication between nodes; this is enforced for GUI and CLI remote operations and requires an explicit login, valid for a configurable time period.
Security level 2 builds on level 1 and requires user login even on the local node where QStar ASM is running. In addition, level 2 logs all user accesses to the system for later analysis.
QStar ASM uses OS-native authentication procedures (such as Windows user accounts, Active Directory or Linux NIS+). This approach simplifies user management and avoids introducing vendor-specific authentication schemas.
User authorization is managed by QStar ASM using three default user roles: Administrator, Operator and Service.
All QStar ASM API calls are checked for proper authorization, and unauthorized operations are rejected.
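A role-based check of this kind can be sketched as follows. The three role names come from the text above, but the operation names and the role-to-operation mapping are hypothetical; QStar's actual authorization matrix is not described here.

```python
# Hypothetical mapping of the three default roles to permitted operations.
ROLE_OPERATIONS = {
    "Administrator": {"configure", "operate", "diagnose"},
    "Operator":      {"operate"},
    "Service":       {"diagnose"},
}

def authorize(role, operation):
    """Reject any API call whose role does not permit the requested operation."""
    if operation not in ROLE_OPERATIONS.get(role, set()):
        raise PermissionError(f"{role!r} may not perform {operation!r}")

authorize("Operator", "operate")        # allowed: no exception raised
try:
    authorize("Operator", "configure")  # rejected
except PermissionError as e:
    print(e)
```

Running such a check on every API entry point, before any work is done, is what makes "unauthorized operations are rejected" an enforced property rather than a convention.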
IPv6 Support
QStar ASM fully supports IPv4, IPv6, or a mixture of both protocols.