SDSC Datacenter Reference Guide
Revised August 2009

I. PURPOSE:

The purpose of this document is to serve as reference material for campus personnel and customers participating in co-location at the San Diego Supercomputer Center. Revisions to this document will be made as necessary to maintain its accuracy.

II. TECHNICAL STANDARDS:

SDSC Data Center Attributes:
• 19,000 sq. ft. of space
• 4 megawatts of current datacenter power, with on-site capacity of 13 megawatts
  o 208V distribution currently available
  o 480V available

Equipment Racks:
The racks used and provided in the SDSC datacenters are APC AR3100s in the West Datacenter and Liebert/Knurr Miracel 600mm-width, 1200mm-depth, 42U racks in the East Datacenter. All racks are labeled front and rear at the top with their room coordinates. The racks will include overhead cable troughs and thermal containment systems.
• http://www.apc.com/resource/include/techspec_index.cfm?base_sku=AR3100
• http://www.knuerr.com/web/zip-pdf/en/IT-Special-Catalog/IT-Special-Catalog-Miracel.pdf

Networking:
The SDSC colocation network fabric is designed to provide a stable, flexible infrastructure with high performance and reliability. All backbone trunks currently utilize 10Gb links, in some cases with more than one bonded together for increased throughput. The Layer 2/3 colocation fabric will include Juniper core and Arista aggregation switches. The design will support thousands of 1Gb ports and hundreds of 10Gb ports, with hundreds of VLANs available. VLANs conveniently provide a virtual space in the network devoted to a customer's IP address space. Customers may require local data and management VLANs assigned to the same or different racks. This allows several different options depending on how a customer wants to use their rack space. The SDSC network infrastructure agreed upon to support scalable colocation is outlined below.

At a rack level, there are three standard connectivity options (a rough oversubscription estimate for the in-rack option is sketched at the end of this section):
• In-rack 48- or 24-port Juniper EX 4200 switches:
  o The switch is recharged to the campus up front, with an annual maintenance cost charged for maintenance and the depreciation of the upstream fabric.
  o Hosts are connected to one or two 1Gb copper ports or to a 10Gb port on the EX 4200 switch.
  o Up to ten EX 4200 switches share trunks back into the fabric using a redundant 128Gb ring architecture that allows all ten switches to act as one switch from a management and performance perspective.
• 10Gb links directly into the aggregation switch fabric:
  o For higher-bandwidth needs or other special circumstances, 10Gb links can be provided into the Arista aggregation switch layer of the network fabric.
  o These 10Gb ports are leased on an annual basis, recouping the depreciation costs of their share of the upstream fabric.
• 10Gb links directly into the core switch fabric:
  o For certain special circumstances, 10Gb links can be provided directly into the Juniper core switch layer of the network fabric.
  o These 10Gb ports are leased on an annual basis, recouping the depreciation costs of their share of the upstream fabric.

Additional special configurations are available on a case-by-case basis, working with the networking groups of the campus and SDSC. These might include a unique BGP peering point, dedicated 10Gb L2/L3 path(s) to CENIC, or other WAN networking needs.
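The in-rack option above lends itself to a quick back-of-the-envelope capacity check. The Python sketch below estimates the worst-case oversubscription of a fully populated virtual-chassis ring; the port count, number of stacked switches, and uplink bandwidth used here are illustrative assumptions, not SDSC-published figures.

    # Rough oversubscription estimate for the in-rack EX 4200 option.
    # All figures below are illustrative assumptions, not SDSC-published numbers.

    SWITCHES_IN_RING = 10   # up to ten EX 4200s act as one switch (per this guide)
    PORTS_PER_SWITCH = 48   # assuming the 48-port model, fully populated
    HOST_PORT_GBPS = 1      # 1Gb copper host connections
    UPLINK_GBPS = 2 * 10    # assumption: two 10Gb trunks from the ring into the fabric

    def oversubscription(switches, ports, host_gbps, uplink_gbps):
        """Ratio of total possible host demand to uplink capacity."""
        return (switches * ports * host_gbps) / uplink_gbps

    ratio = oversubscription(SWITCHES_IN_RING, PORTS_PER_SWITCH, HOST_PORT_GBPS, UPLINK_GBPS)
    print(f"Worst-case oversubscription: {ratio:.0f}:1")   # 480Gb of host ports over 20Gb of uplink -> 24:1

In practice utilization sits well below this worst case; customers with sustained high-bandwidth needs are the intended users of the 10Gb aggregation- and core-fabric options.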
If using SDSC address space, IP addressing is managed by SDSC's network group. Each campus would be provided subnets and VLANs as needed to support its networking needs, with the allocated IPs charged for at current rates. DNS services within SDSC's address space are included in the per-IP costs. In some (but not all) cases, DNS services may be delegated to the participating campus. Alternately, campus-delegated address space would be completely managed by the participating campus and would not incur any per-IP costs.

Additional services include:
• Time service supplied by SDSC servers.
• SNMP statistics, published on a colocation web server, for VLAN usage or port usage at the head switch in a rack.
• Performance-testing NDT servers online for local or remote testing of bandwidth.
• Troubleshooting/configuration assistance at current hourly rates.

Prohibited activities include:
• NAT IP addressing for data access (VPN access is permitted).
• Span or mirror ports in the fabric switches without coordination with SDSC networking.
• Excessive performance testing through the fabric.

Rack Power:
• SDSC will provide two NEMA L6-30, 30 Amp, 208V power connections for all racks (see the capacity sketch below). If additional power is required, alternate arrangements should be made with SDSC.
• Conditioned power may be available but is not guaranteed.
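As a quick check on how much equipment the standard feed supports, the sketch below estimates usable capacity from the two NEMA L6-30 circuits. The 80% continuous-load derating is an assumption based on common electrical practice, not a stated SDSC provisioning rule.

    # Usable power estimate for the standard rack feed: two NEMA L6-30 circuits at 208V.
    # The 80% continuous-load derating is an assumption, not an SDSC policy statement.

    VOLTS = 208
    BREAKER_AMPS = 30
    CIRCUITS = 2
    DERATE = 0.80   # assumed continuous-load derating

    per_circuit_watts = VOLTS * BREAKER_AMPS * DERATE      # ~4,992 W
    per_rack_watts = per_circuit_watts * CIRCUITS          # ~9,984 W

    print(f"Per circuit: {per_circuit_watts / 1000:.1f} kW")
    print(f"Per rack (both circuits combined): {per_rack_watts / 1000:.1f} kW")

Under these assumptions a standard rack supports roughly 10 kW of IT load when both circuits are used for capacity; if the two feeds are instead treated as a redundant A/B pair, the usable figure is roughly half that.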
Seismic Protection:
• All equipment (i.e., racks, mainframes, clusters, etc.) placed in the SDSC Data Center must be installed on top of Iso-Base seismic platforms. This provides an additional level of protection in the event of an earthquake or other major catastrophe.
  o http://www.worksafetech.com/pages/isobase.html

Aisle Containment:
• SDSC utilizes aisle containment to maximize the life and efficiency of Datacenter equipment. Equipment hosted at the SDSC Datacenter must have a front-to-back airflow orientation.

III. DATACENTER INFRASTRUCTURE:

HVAC:
• West Datacenter:
  o The San Diego Supercomputer Center utilizes Liebert air handling units for Datacenter cooling. In the West Datacenter there are multiple 30- and 40-ton cooling units placed throughout the room to deliver cold air. Air is delivered through perforated tiles in the 2-foot raised floor. The ceiling above acts as the return plenum, sending the exhausted hot air back to the CRAC (computer room air conditioning) units. The CRAC units are supplied chilled water by the UCSD central pump station. In the event this supply is interrupted, SDSC has a local chilled water loop that can provide supplemental cooling during the outage.
• East Datacenter:
  o The air handling units are underneath the floor. The supply plenum is beneath the floor, while the return plenum is above the ceiling, as in the West Datacenter. The ambient room temperature is controlled by a Johnson Controls system. Interaction with this system is performed inside the SDSC NOC.

Fire Suppression:
• Detection:
  o The detection system is a combination of in-room and under-floor particle detection devices.
• Water Suppression:
  o Like most building facilities, the SDSC Datacenter is equipped with a water-based fire suppression system. It is a "dry" system, meaning the pipes are not filled with water until an initial particle detection. It takes a minimum of two separate detections to energize the system, and actual discharge takes place only if the wicks located at the nozzles reach a critical temperature.
  o A false positive can occur if one of the nozzles recessed in the ceiling is damaged or tampered with. This will fill the system with water but not lead to a discharge. Safety guidelines should be followed at all times in the SDSC Datacenter.
• West Datacenter Clean Agent (Halon):
  o The detectors are similar to the ones used in the water system; however, they are underneath the floor. Particle detection underneath the floor is possible because particulates are cycled through the room by the CRAC units. Two detections are required to discharge the system.
  o There is an abort button physically located in the Datacenter that can prevent discharge if pressed before the second detection.
• East Datacenter HFC-125 (FM-200-type clean agent):
  o The SDSC Datacenter expansion utilizes this type of system. Detection devices are mounted to the ceiling and the aisle containment, but not underneath the floor. All other operations of this system are similar to those of the Halon system. Like the Halon system, there is an abort button located near the main entrance to the Datacenter, next to the EPO button.
• Once the fire suppression system energizes (either water or Halon), the room will EPO (emergency power off) all equipment except the lights (for safety purposes). There are 6 EPO buttons (4 in the existing Datacenter and 2 more in the expansion) throughout the Datacenter that can also be pushed manually in the event of an emergency.

Power & Electricity:
• Utility Supply:
  o SDSC currently has two utility providers that supply electricity to the building and Datacenter. With the completion of a current electrical upgrade, these systems total approximately 12,000 kVA (see the conversion sketch at the end of this section).
• Distribution:
  o SDSC has multiple transformers and power distribution units (PDUs) that supply approximately 10 megawatts of power to the Datacenter. Distribution voltages range from 110V to 480V.
• Uninterruptible Power Supply (UPS):
  o The SDSC Datacenter has a combination of MGE and Mitsubishi UPS systems. These UPSs primarily keep core infrastructure and critical systems running in the event of a power compromise or loss. These critical systems include, but are not limited to, the SDSC NOC and the San Diego Network Access Point (SDNAP).
• Generator & Catastrophe Support:
  o There are two generators that supply the Datacenter with supplemental power in the event of utility loss. One of these generators supports the SDNAP. The other is shared with a nearby facility but supplies a substantial amount of energy to the Datacenter expansion. Both of these generators run on diesel fuel and are checked by …
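The approximately 12,000 kVA utility figure and the approximately 10 megawatt distribution figure above can be reconciled by converting apparent power (kVA) to real power (kW). The sketch below is a minimal conversion assuming a typical power factor; the actual power factor and any derating at SDSC are not stated in this guide.

    # Convert the quoted utility rating (apparent power, kVA) to real power (kW).
    # The power factor is an assumed, typical value -- not an SDSC-published figure.

    UTILITY_KVA = 12_000    # approximate combined utility supply (from this guide)
    POWER_FACTOR = 0.9      # assumption: typical for a mixed IT and mechanical load

    real_power_kw = UTILITY_KVA * POWER_FACTOR
    print(f"Approximate real power available: {real_power_kw / 1000:.1f} MW")   # ~10.8 MW

Under this assumption the utility supply corresponds to roughly 10-11 MW of real power, which is consistent with the approximately 10 megawatts quoted for distribution to the Datacenter, with some capacity also serving the rest of the building.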
