Multi-Tenancy is the sharing of computing resources between two or more organizations.
There is an ever-increasing demand for computing power, and consequently an ever-increasing cost for computing resources. This is particularly true for High Performance Computing. A typical high performance computer (HPC) can easily cost four million dollars. Not only is the purchase price prohibitive, but siting these large, power-hungry machines is a challenge. To compound the problem, organizations are protective of their valuable data and do not want other untrusted organizations on their hardware. The challenge, then, is to architect a truly secure Multi-Tenant system.
Linux systems are inherently multi-user systems. This is achieved through Discretionary Access Control (DAC). Under DAC, the file owner determines who has access and what they are allowed to do. In most cases DAC is secure enough, but sometimes a computing resource contains information so sensitive that relying on DAC alone is not good enough.
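For example, under DAC the owner of a file alone decides its permissions, and nothing in the base OS stops the owner from loosening them (the file, user, and group names below are illustrative):

    # tom owns results.dat, so tom decides who may read it
    chgrp proj1 results.dat
    chmod 640 results.dat    # group proj1 may read it; others may not
    chmod o+r results.dat    # but tom can just as easily expose it to
                             # every user on the system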
Typically a highly sensitive endeavor will not share a computing resource. Rather than take the chance of data spillage, a separate resource is purchased. While hardware separation is clearly the safest way to protect data, it is also the most expensive. Even a small HPC can cost a million dollars. Finding a computer facility with enough floor space, cooling, and power to accommodate an HPC can be challenging, and in some cases more expensive than the HPC itself.
The solution in a nutshell is this:
To ensure positive data separation, assign a unique user account to each compartment of information, as in the schema below (account creation is sketched after the table).
User ID Schema

    UID         Role
    _________   __________________________
    tom         Unconfined user
    tom-proj1   Confined to project1 space
    tom-proj2   Confined to project2 space
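A minimal sketch of creating such accounts; the group names and the use of one primary group per project are illustrative, not prescribed by the schema:

    # Hypothetical per-project groups reinforce the split at the DAC level
    groupadd proj1
    groupadd proj2
    useradd -m tom                    # unconfined user
    useradd -m -g proj1 tom-proj1     # confined to project1 space
    useradd -m -g proj2 tom-proj2     # confined to project2 space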
Use diskless workstations to ensure that users can access only one project's data at a time: assign a project-specific filesystem to each user's home directory. In this way, project members can work together, but non-members have no access (a configuration sketch follows the table below).
    OS        Home directory method
    _______   ______________________________
    Linux     Automount /home/<userid>
    Windows   Project-specific Windows share
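On the Linux side, the automounter can hand each confined account its project's filesystem as a home directory. A minimal sketch, assuming hypothetical per-project NFS servers and export paths:

    # /etc/auto.master: hand /home over to the automounter
    /home   /etc/auto.home

    # /etc/auto.home: each confined account's home lives on its
    # project's NFS server, so logging in as tom-proj1 exposes only
    # project1 data (server names are illustrative)
    tom-proj1   -rw   nfs-proj1.example.com:/export/home/tom-proj1
    tom-proj2   -rw   nfs-proj2.example.com:/export/home/tom-proj2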
Every node is a KVM server containing a unique VM for each project/compartment. Every VM has its own set of NFS mounts. Project-specific SLURM banks ensure that the correct set of VMs is used. All VMs would have to run constantly unless appropriate VM management steps are taken; having more than one VM running per node would be a resource drain and constitute a significant performance loss. Ideally, SLURM software modifications can be made to enable VM management.
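In slurm.conf terms, the bank-to-VM mapping might look like the fragment below. The node names, counts, and sizes are invented for illustration, and "banks" are expressed here as Slurm accounts:

    # Each partition contains only one project's VMs, and only that
    # project's account (bank) may submit jobs to it.
    NodeName=vm-proj1-[01-16] CPUs=32 RealMemory=128000
    NodeName=vm-proj2-[01-16] CPUs=32 RealMemory=128000
    PartitionName=proj1 Nodes=vm-proj1-[01-16] AllowAccounts=proj1
    PartitionName=proj2 Nodes=vm-proj2-[01-16] AllowAccounts=proj2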
The Slurm Workload Manager, formerly known as Simple Linux Utility for Resource Management, or simply Slurm, is a free and open-source job scheduler for Linux and Unix-like kernels, used by many of the world's supercomputers and computer clusters.
Follow this Slurm guide to configure Slurm to work with VMs.
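Absent such software modifications, one way to approximate VM management is through Slurm's Prolog hook on each KVM host. The sketch below is an assumption-laden illustration, not the referenced guide: the VM naming convention, the virsh-based management, and deriving the project from SLURM_JOB_USER are all invented here.

    #!/bin/bash
    # Hypothetical Slurm Prolog for a KVM host node. Derives the
    # project from the confined account name (e.g. tom-proj1 -> proj1,
    # per the UID schema above), shuts down any other project's VM on
    # this host, and boots the correct one before the job starts.
    proj="${SLURM_JOB_USER##*-}"        # tom-proj1 -> proj1
    vm="$(hostname -s)-${proj}"         # assumed VM name: <host>-<project>
    for running in $(virsh list --name); do
        [ "$running" != "$vm" ] && virsh shutdown "$running"
    done
    virsh list --name | grep -qx "$vm" || virsh start "$vm"

A matching Epilog could shut the project's VM down once the node goes idle, so that at most one VM consumes resources on a node at any time.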