Soloist OpenVMS Clusters: A New Perspective to Improve Functionality, Flexibility, and Usability
Standalone OpenVMS systems and OpenVMS clusters are generally thought of as distinct configuration alternatives. This distinction has long outlived its usefulness and should be removed. In its place, I propose Soloist OpenVMS Cluster as a more appropriate nomenclature and useful default configuration. A Soloist OpenVMS Cluster is an OpenVMS Cluster containing a single system. This posture should be the preferred configuration for single OpenVMS systems. Such an OpenVMS Cluster may be a trivial instance of an OpenVMS Cluster, but it has significant advantages when viewed from a long-term perspective. This change in posture allows seamless transition of operations to a multi-node OpenVMS Cluster at some future time, on either a temporary or permanent basis.
While they may seem like contradictions in terms, both Soloist OpenVMS Clusters and Soloist shadow sets have significant utility, as OpenVMS Clusters and as shadow sets respectively.
Soloist OpenVMS Clusters reflect a more refined strategic vision of how OpenVMS technologies bring business value to the enterprise. This concept unifies the power of technologies including OpenVMS Clusters, Host-Based Volume Shadowing, and networking.
OpenVMS has many advantages when compared to other operating systems. It has been designed to be secure, versatile, and robust. Its design principles are time proven. Its architecture is also based on a consistent set of core concepts. These basic conceptual building blocks ensure that improvements in underlying layers (e.g., RMS, OpenVMS Clusters) are realized by applications without the need for code changes. Often neither re-compilation nor re-linking of executable images is necessary. Changes to underlying facilities are automatically incorporated when images are activated or devices are accessed.
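The claim about image activation can be made concrete. The sketch below is illustrative (the image name MYAPP is hypothetical): an application linked against a shareable image such as the run-time library picks up a new version of that library the next time the image is activated, with no re-link.

```
$! Hypothetical example: MYAPP is linked against LIBRTL as a
$! shareable image. If a new version of the library is installed,
$! it is picked up automatically at the next image activation --
$! no re-compilation or re-link of MYAPP is required.
$ LINK MYAPP.OBJ, SYS$INPUT:/OPTIONS
SYS$SHARE:LIBRTL.EXE/SHAREABLE
$ RUN MYAPP
```

The same late-binding behavior is what allows improvements in facilities such as RMS to reach existing applications transparently.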
I have been told that this is yesterday's news and that these concepts have been fully exploited. I beg to differ. Soloist OpenVMS Clusters transform features (e.g., OpenVMS Clusters, Host-based Volume Shadowing) from niche capabilities into a strategic foundation for right-sizing capacity and capability over time. The ability to agilely reconfigure capacity without disruption has tremendous business value. In particular, Soloist OpenVMS Clusters are the starting point that allows an OpenVMS-based system to be seamlessly updated and right-sized repeatedly over time, with no user perceived change. This is the sine qua non of “infrastructure computing.”
Presently, Host-Based Volume Shadowing and OpenVMS Clusters are licensed both as free-standing products and as part of the Enterprise and Mission Critical licensing ensembles, respectively, on HP Integrity servers.
While not well publicized, the OpenVMS Clusters license PAK is not required to configure and operate an individual node as a Soloist OpenVMS Cluster. However, the license PAK is needed to expand the OpenVMS Cluster beyond a single member. Volume Shadowing for OpenVMS presently requires separate licensing, unless it is otherwise already licensed.
The difference between a standalone OpenVMS system and a Soloist OpenVMS Cluster is not a difference in normal operation; the difference emerges over time.
Considered over time, this change enables what I shall refer to as “spiraling”: an approach to system implementation that is agile, dynamic, and cost-effective. This approach leverages the facilities provided by OpenVMS to improve cost efficiency and uptime. Put simply, spiraling allows a system to evolve seamlessly from test-bed to full production. Scale is immaterial. The test-bed can be a startup in a garage, dining room, or den; production can range anywhere from two small servers to a multi-site OpenVMS Cluster comprising nearly 100 nodes.
The key to spiraling is expanding the strategic use of OpenVMS Clusters, host-based volume shadowing, and networking to transition workload over time. OpenVMS sites that have achieved cluster uptimes of decades have been following this approach for years in one form or another.
Expanding on spiraling, a small change in the default configuration options during OpenVMS installation would also be appropriate. Presently, a virgin installation of OpenVMS asks whether the system will be part of an OpenVMS Cluster. The change in terms of initial configuration is modest. It revolves around the classic question:
Will this system be a member of an OpenVMS Cluster? (Yes/No)
The default answer is presently “No.” “Yes” would be a better default. For simplification, the entry of the DECnet node address and the SCS system ID should also be unified. This changes the default basic configuration from a standalone OpenVMS system to a Soloist OpenVMS Cluster.
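The revised dialog might look something like the following sketch. The exact prompt wording and the sample address are illustrative, not actual installation output; the derivation of SCSSYSTEMID from the DECnet address (area * 1024 + node number) is the conventional rule.

```
Will this system be a member of an OpenVMS Cluster? (Yes/No) [Yes]

    What is this node's DECnet node address (area.node)? 1.42

    (SCSSYSTEMID derived automatically from the DECnet address:
     1 * 1024 + 42 = 1066)
```

A single answer thus configures a Soloist OpenVMS Cluster by default, with no additional questions beyond those already asked today.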
Though unnamed, these techniques have been part of my consulting practice for many years. Spiraling has also been an unnamed foundational concept in my recent HP Technology presentations. It underlies Evolving OpenVMS Environments: An Exercise In Continuous Computing and Migrating OpenVMS Storage Environments without Interruption or Disruption. It underlies Strategies for Migrating from Alpha and VAX Systems to HP Integrity Servers on OpenVMS as well as the 2007 OpenVMS Technical Journal paper of the same name.
This change in concept requires a small change in OpenVMS licensing. During normal operations, this change does not increase or decrease the functionality of OpenVMS. Thus, there is no downside. At the very worst, it is revenue neutral. Over time, this change provides benefits to users and otherwise untapped revenue potential for HP. Implementation of this small change is straightforward. The single needed change is to allow the use of Volume Shadowing for OpenVMS restricted to single member shadow sets as part of the basic OpenVMS licenses for Alpha and Integrity servers.
This licensing change is not without precedent. Since the beginning, DECnet-VMS has allowed users without a specific network license to operate a degenerate network composed of a single node. This configuration provides users with full access to the DECnet APIs, with the single proviso that all communications are intra-node, not inter-node. While this is somewhat useful to end-users, it is far more useful to OEMs and ISVs. It allows test, development, and small production systems comprised of single nodes, with seamless expansion to multiple nodes possible by simply changing the value of a configuration setting, often a logical name. When implemented using logical names, it is even possible to effect the change without interrupting system operation for a restart.
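The logical-name technique can be sketched as follows. The logical name APP_SERVER_NODE is hypothetical, something an application would read to decide where its server component runs; in DECnet, node specification "0" refers to the local node.

```
$! Hypothetical configuration logical read by the application to
$! locate its server. "0" means the local node, so all DECnet
$! traffic remains intra-node:
$ DEFINE/SYSTEM/EXECUTIVE_MODE APP_SERVER_NODE "0"

$! Later, redirect new connections to a remote node -- no system
$! restart is required, only a redefinition of the logical name:
$ DEFINE/SYSTEM/EXECUTIVE_MODE APP_SERVER_NODE "BCKUP1"
```

Applications coded this way move from single-node to multi-node operation without modification.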
Downstream, this change dramatically eases hardware transitions. The risks of cold-turkey, big-bang cut-overs become artifacts of the past. Hardware transitions take on the same characteristics familiar from high-availability software upgrades: rolling updates onto new hardware. This can be facilitated with limited-term temporary license PAKs for operation of capabilities beyond those of a soloist.
CPU changes fit this model, whether an upgrade within an architecture or a transition between architectures. In all cases, a Soloist OpenVMS Cluster can be transitioned by temporarily expanding it to a two-node cluster, transitioning the workload, removing the old system from the cluster, and resuming soloist operation. Once the cluster has been reduced to a soloist again, multi-node cluster support is no longer required and the temporary PAK can lapse. The salient point is that there is no externally visible interruption of service.
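The expand-and-contract sequence uses the standard cluster configuration procedure. The sketch below is illustrative; CLUSTER_CONFIG.COM is menu-driven, and the menu choices shown in the comments are paraphrased rather than quoted.

```
$! On the soloist, add the new system as a second cluster member
$! (menu-driven procedure; select the option to ADD a node):
$ @SYS$MANAGER:CLUSTER_CONFIG.COM

$! Transition the workload to the new member, then run the same
$! procedure again and select the option to REMOVE the old node:
$ @SYS$MANAGER:CLUSTER_CONFIG.COM
```

At no point in this sequence is the cluster, or the services it provides, shut down.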
Similarly, transitioning to new mass storage becomes a matter of testing the new storage, creating or installing new volumes, adding those volumes to already extant host-based shadow sets, waiting for the shadow copies to complete, and retiring the old volumes from the shadow set. This permits new hardware to be eased into production use on a risk-controlled basis. The process can be paused at any point if problems are encountered. Once all of the volumes have been migrated and the old array disconnected, there is a reduced need for multi-volume shadow sets.
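The storage migration steps above map directly onto a handful of DCL commands. The device names and volume label below are illustrative; DSA1: stands for an existing single-member shadow set.

```
$! DSA1: is an existing shadow set. Add the new volume on the new
$! array as an additional member; a shadow copy begins automatically:
$ MOUNT/SYSTEM DSA1: /SHADOW=($3$DGA200:) DATA_DISK

$! Monitor the progress of the shadow copy:
$ SHOW DEVICE DSA1:

$! Once the copy is complete, retire the old member from the set:
$ DISMOUNT $1$DGA100:
```

Because each volume is migrated independently, the process can be paused, or a member restored, at any point if problems are encountered.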
At first glance, Soloist OpenVMS Clusters and Shadow Sets may seem somewhat unusual or unorthodox. That is a false impression. Both OpenVMS Clusters and Shadow Sets have fully supported these configurations since the technologies were first introduced. Everything mentioned earlier in this article is fully in accord with the documentation and the SPDs.
Admittedly, most are more familiar with OpenVMS Clusters and Shadow Sets as duets and larger ensembles. Some may wish to better understand these configurations through personal experience. In the case of OpenVMS Clusters, all that is needed is a test configuration.
Operation of a Soloist OpenVMS Cluster merely requires the basic OpenVMS license. The OpenVMS Cluster license is required to operate an OpenVMS Cluster with more than a single member.
Volume Shadowing for OpenVMS requires a license PAK, even for operation as a soloist. In the workplace, trial license PAKs are available for those who wish to try out this mode of operation, as are licenses for firms that are members of HP's Developer & Solution Partner Program (DSPP). For educational institutions, Volume Shadowing for OpenVMS is one of the layered products included in the program, so students and faculty can explore the utility of these configurations.
There are also many who, for one reason or another, do not or cannot explore these concepts in their working lives. The reason may be lack of access to hardware, or a desire to explore and understand these concepts in the privacy of one's study. For these, the OpenVMS Hobbyist license program provides an avenue to increase one's understanding of OpenVMS without the support of a commercial sponsor.
For those who lack hardware at home, or wish to use their next airplane or commuter train ride to explore these concepts, there is also a variety of VAX and Alpha system simulators available for personal, non-commercial use.
All of these simulators run on common x86-based systems, including notebook computers. An OpenVMS Cluster implementing “spiraling” is a far more supple tool, providing more functionality than the virtual machine migration options offered in many other contexts.