North Hall Data Center News - Summer 2018

July 23, 2018

Greetings North Hall Data Center Customers,

We hope your Summer is going well. This update shares a few action items under way at the NHDC to improve our services. Also, if you haven’t been to the NHDC in a while, you might enjoy the pictures, stats, and info in the presentation prepared for the Cyberinfrastructure Committee, linked below.

https://tinyurl.com/NHDC-CyberInfPres-2018

If you have any questions, feel free to drop us an email at ets-nhdc@ucsb.edu or call me at x7960. Thank you for the trust you place in us by hosting your systems at the North Hall Data Center.

Kirk Grier

On behalf of the ETS NHDC management team

 

Main UPS Replacement - UPS1 and UPS2

Our two Liebert 225kVA (180kW) UPS units are beyond end-of-life at 18 years old, and one has failed. The new APC Galaxy VM 225kVA units are considerably more efficient (98% vs. 80%), have a ten-year service life, and will save at least $40k per year in operating costs over the existing units. The project cost is approximately $355k; work is underway now, with expected completion in August/September 2018.
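As a rough illustration of where savings of that magnitude come from, here is a back-of-the-envelope sketch. The average load and electricity rate below are illustrative assumptions, not measured figures; note also that every kW of UPS waste heat must additionally be removed by cooling, which is where the remainder of the projected savings would come from.

```python
# Rough estimate of annual savings from higher UPS efficiency.
# The load and electricity rate below are illustrative assumptions.
avg_load_kw = 120       # assumed average IT load carried by the UPS pair
rate_per_kwh = 0.15     # assumed blended electricity cost, $/kWh
hours_per_year = 8760

def annual_waste_cost(load_kw, efficiency):
    """Cost of the power lost inside the UPS itself over one year."""
    input_kw = load_kw / efficiency
    waste_kw = input_kw - load_kw
    return waste_kw * hours_per_year * rate_per_kwh

old_cost = annual_waste_cost(avg_load_kw, 0.80)  # legacy Liebert units
new_cost = annual_waste_cost(avg_load_kw, 0.98)  # APC Galaxy VM
print(f"Estimated direct savings: ${old_cost - new_cost:,.0f}/year")
# → Estimated direct savings: $36,202/year (before reduced cooling load)
```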

 

Supplemental UPS Addition - UPS3

Our existing power distribution for UPS1 and UPS2 limits their capacity. To provide additional capacity and redundancy, a third UPS will be deployed on the data center floor. This APC Symmetra PX UPS will scale from 100kW to 250kW as future needs dictate. Initially, it can provide redundant, high-availability power to critical systems, or additional capacity as needed. The project cost is approximately $90k; work is underway now, with expected completion in July/August 2018.

 

Row 5 Build - 13 racks

Since the Thomas Fire and Montecito Debris Flow, customer intake at NHDC has increased. We have no unoccupied racks available, so we started planning to deploy row 5 as a ten-rack row. Row 5 has now expanded to 13 racks and is over 75% allocated! Five racks are designated High Availability and will have redundant networking and triple UPS service. Row 5 will be our first row with 40Gb/s redundant connectivity to NHDC core networking. The project cost is approximately $239k; work is underway now, with expected completion in July/August 2018.

 

SIS&T Production Move from SAASB Data Center to NHDC

Once NHDC’s Row 5 is nominally available, Student Information Systems & Technology (SIS&T) will be moving their production services from SAASB to NHDC. Initially, SIS&T will prototype functions in parallel with NHDC’s Row 5 and UPS projects, though production cutover will need to wait until potentially disruptive work at NHDC is complete. SIS&T expects to complete their move this Summer, in time for classes starting in the Fall.

 

Remote Colocation at CDT Sacramento

One of our goals is to have remote, off-campus colocation for hosting critical systems as virtual machines under VMware, as well as the option to physically host additional mission-critical hardware. The California Department of Technology’s (CDT) “Gold Camp” Data Center in Rancho Cordova will provide essential colocation services for our equipment, CENIC connectivity, and a menu of additional value-added services to choose from should we need them. UCSB services to be hosted at CDT will be defined by campus business needs and managed as part of the NHDC offering. Projected costs are $25/rack/year plus the hardware and software we deploy there. We expect to have a footprint at CDT operational in late Fall 2018.

 

Row 4 Build - 7 Racks - but maybe not!

If our present intake trend continues, row 5 will fill by year end. We will then need to start provisioning row 4, which, like row 3, is limited to 7 racks. Before then, we need to look at the generations of hardware we are hosting at NHDC. We have been open since Spring 2012, and while much equipment has entered, little has left. NHDC shouldn’t become a parking spot for older, obsolete, inefficient systems. We need to craft services that can be shared, so customers do not need to own commodity computing hardware at all, much like they don’t need to own and maintain a server room now :)

 

Last and Far from Least - Critical Cooling at Capacity

Critical Cooling is when NHDC runs on its two on-site electric chillers rather than the campus Chilled Water Loop. We have a maximum of 60 tons (210kW) of cooling available in this mode. Today, with everything currently hosted at NHDC powered on, we are using all of that, and a bit more.
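For those curious how the two capacity figures relate: one refrigeration ton is defined as 12,000 BTU/h, or about 3.517 kW of heat rejection, so 60 tons works out to roughly the 210kW quoted above.

```python
# Convert refrigeration tons to kW of heat-rejection capacity.
# 1 ton of refrigeration = 12,000 BTU/h ≈ 3.517 kW.
KW_PER_TON = 3.517

critical_cooling_tons = 60
capacity_kw = critical_cooling_tons * KW_PER_TON
print(f"Critical Cooling capacity: {capacity_kw:.0f} kW")
# → Critical Cooling capacity: 211 kW
```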

We anticipated this day; some of you may recall our CRITICAL1, CRITICAL2, and NON-CRITICAL equipment classifications, which let us triage which systems are powered down to reduce the load. During NHDC’s design, research cluster systems were defined as NON-CRITICAL. We will therefore be reaching out to the admins of cluster systems to coordinate signaling and shutdown methods for use when we transition to Critical Cooling.

Many years ago, the old Computer Center Machine Room reached the same point with the CNSI cluster, which implemented its shutdown internally, so there are many ways to do this. Today we should be able to shut down and restart systems reliably with automation. Also, if you have non-cluster workloads on your cluster, please let us know when we talk. Those might be candidates for moving to one of our shared Core IT Services platforms.
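As a minimal sketch of what such automation could look like (the host names, root SSH access, and five-minute grace period here are all placeholder assumptions; the actual mechanism will be worked out with each cluster's admins):

```python
# Hypothetical sketch: signal NON-CRITICAL cluster nodes to power off
# cleanly when NHDC transitions to Critical Cooling.
import subprocess

NON_CRITICAL_NODES = ["node01", "node02", "node03"]  # placeholder host names

def shutdown_command(host, grace_minutes=5):
    """Build the ssh command asking one node to power off after a grace period."""
    return [
        "ssh", f"root@{host}",
        f"shutdown -h +{grace_minutes} "
        "'NHDC on Critical Cooling - powering down'",
    ]

def signal_shutdown(dry_run=True):
    for host in NON_CRITICAL_NODES:
        cmd = shutdown_command(host)
        if dry_run:
            print("would run:", " ".join(cmd))
        else:
            # check=False: tolerate unreachable nodes and keep going
            subprocess.run(cmd, check=False)

signal_shutdown(dry_run=True)
```

Restart after the event could follow the same pattern using out-of-band power management (IPMI, wake-on-LAN, or a PDU interface), depending on what each cluster supports.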