A Shared Resource
ACRES is a shared compute cluster available to all Clarkson faculty members and their research teams. It is open to the Clarkson community as a computing resource supporting departmental or sponsored research and limited academic coursework; accordingly, a faculty member's sponsorship is required for all user accounts.
Please note that your use of this system falls under the “Information Technology Acceptable Use Policy”, as described in the Clarkson Operations Manual. In particular, sharing authentication credentials is strictly prohibited. Violation of this policy will result in termination of access to ACRES.
The initial purchase of ACRES was supported by seed funding from the National Science Foundation under Grant No. 1925596. The cluster comprises a set of freely available compute nodes, specialized resources such as large-memory and GPU nodes, and the associated networking equipment and storage. These resources can be used to run computational codes and programs, and are managed through a job scheduler using a fair-access queueing policy.
What is a cluster?
A computing cluster is a federation of multiple compute nodes (independent computers), most commonly linked together through a high-performance interconnect network.
What makes it a “supercomputer” is the ability of a program to address resources (such as memory or CPU cores) located in different compute nodes through that high-performance interconnect.
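To make this concrete, the short sketch below shows a program whose processes can span several nodes and combine their results over the interconnect. It uses mpi4py, a Python binding for MPI; this is purely illustrative, and nothing here implies that ACRES users must write MPI code.

```python
"""Illustrative sketch: one program spanning multiple compute nodes.
Assumes mpi4py is available; launch with e.g. `mpirun -n 4 python demo.py`."""
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # this process's id within the whole job
size = comm.Get_size()  # total processes, possibly spread over many nodes

# Every rank contributes a partial value; the interconnect carries the
# messages that combine results from CPU cores on different nodes.
total = comm.allreduce(rank, op=MPI.SUM)
print(f"rank {rank}/{size} on {socket.gethostname()}: total={total}")
```

When launched across two or more nodes, each process prints a different hostname, yet all of them agree on the same combined result.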
On a computing cluster, users typically connect to login nodes using a secure remote login protocol such as SSH. Unlike in traditional interactive environments, users then prepare compute jobs and submit them to a resource scheduler. Based on a set of rules and limits, the scheduler tries to match each job's resource requirements with available resources such as CPUs, memory, or computing accelerators such as GPUs. It then executes the user-defined tasks on the selected resources and writes output files to one of the storage locations available on the cluster, for the user to review and analyze.
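As a concrete illustration, the sketch below submits a small batch job from Python. It assumes the scheduler is Slurm (a common choice, but an assumption here), and the program name and resource requests are hypothetical; the actual scheduler, partitions, and limits on ACRES may differ.

```python
#!/usr/bin/env python3
"""Sketch of the submit-and-wait workflow described above, assuming a
Slurm scheduler. Resource requests and program names are placeholders."""
import subprocess

# A minimal batch script: request 4 CPU cores and 8 GB of memory for
# one hour, then run a (hypothetical) program on the selected node.
job_script = """#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00
#SBATCH --output=demo-%j.out

./my_program input.dat   # hypothetical user-defined task
"""

# Submit the job; sbatch reads the script from stdin and prints
# "Submitted batch job <id>" on success.
result = subprocess.run(
    ["sbatch"], input=job_script, capture_output=True, text=True, check=True
)
print(result.stdout.strip())
# The scheduler now queues the job, matches its resource requests against
# available nodes, runs it, and writes output to demo-<id>.out for review.
```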
The condominium model
For users who need more than casual access to a shared computing environment, OIT also offers faculty members the option to purchase additional dedicated resources to augment ACRES, thereby becoming ACRES owners. Choosing from a standard set of server configurations supported by OIT staff (known as the ACRES catalog), principal investigators (PIs) can purchase their own servers to add to the cluster.
It is anticipated that, when fully operational, the vast majority of ACRES's compute nodes will be owner nodes, and that PI purchases will be the main driver of the cluster's expansion.
This model, often referred to as the condo model, allows ACRES owners to benefit from the scale of the cluster and gives them access to more compute nodes than their individual purchase provides. Owners thus have much greater flexibility than they would with a standalone cluster.
The resource scheduler configuration works like this:
- owners and their research teams have priority use of the resources they purchase,
- when those resources are idle, other owners can use them,
- when there are no Clarkson jobs waiting in the queue, idle cycles will be used to run jobs from the Open Science Grid.
This lets lower-priority jobs run in the background on otherwise idle resources, while ensuring that owners always retain access to their own nodes, as the sketch below illustrates. Participating owners also have shared access to the original base ACRES nodes, along with everyone else.
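The toy sketch below walks through the three-tier policy for a single idle node. It is purely illustrative: the real scheduler enforces these rules through its own priority mechanisms, and the job and group names here are made up.

```python
"""Toy sketch of the three-tier queueing policy described above."""
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Job:
    name: str
    owner: str            # PI group that submitted the job
    is_osg: bool = False  # job borrowed from the Open Science Grid

def pick_job(node_owner: str, queue: List[Job]) -> Optional[Job]:
    """Select the next job for an idle node owned by `node_owner`."""
    # Tier 1: the node's owner has priority on their own hardware.
    for job in queue:
        if job.owner == node_owner:
            return job
    # Tier 2: idle owner nodes are open to other Clarkson users.
    for job in queue:
        if not job.is_osg:
            return job
    # Tier 3: with no Clarkson jobs waiting, run Open Science Grid work.
    for job in queue:
        if job.is_osg:
            return job
    return None

queue = [Job("osg-42", "OSG", is_osg=True), Job("sim-1", "lab-b")]
print(pick_job("lab-a", queue).name)  # -> sim-1 (tier 2 beats tier 3)
```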