Sunday, March 23, 2014

Storage: more capacity calculations

Following on from the previous post on Efficiency and Capacity, which baselined "a pile of disks" as "100% efficient".

Some additional considerations:
  • Cooling.
    • Drives can't be abutted, there must be a gap for cooling air to circulate.
    • Backblaze allow 15x25.4mm drives across a 17.125in chassis, taking around 12%, 52.5mm, of space for cooling.
      • This figure is used to calculate a per-row capacity, below.
  • A good metric for capacity is "equivalent 2.5 in drive platters", table below.
    • 9.5mm x 2.5" drives with 3 platters yield the highest capacity.
    • Better than 5 platter, 3.5" drives.
    • Drive price-per-GB is still higher for 2.5" drives.
      • This may change in time if new technologies, such as "Shingled Writes" are introduced into 2.5" drive fabrication lines and not into 3.5" lines.
  • Existing high-density solutions, measured in "Drives per Rack Unit". SGI "Modular" units, which also support 4 CPUs per 4RU chassis, are the densest storage currently available.
    • Backblaze achieve the lowest DIY cost/GB known:
      • 4RU, vertical orientation, 15 drives across, in 3 rows.
      • fixed drives.
      • 45x3.5" drives = 11.25x3.5" drives per RU
      • 450x3.5" drives per 42RU
    • Supermicro 'cloud' server (below) achieves 12 drives, fixed, in 1RU
      • 12x3.5" drives per RU
      • 504x3.5" drives per 42RU
    • Supermicro High Availability server, supports 36x3.5" removable drives in 4RU
      • 9x3.5" drives per RU
      • 360x3.5" drives per 42RU
      • An alternative 2.5" drive unit puts 24x2.5" removable drives in 2RU,
      • = 8.5x3.5" drives per RU
      • 356 drives per 42RU
    • Open Compute's Open Rack/Open Vault use of 21", not 19", racks (still in a 30" floor space) allows higher disk densities:
      • 30x3.5" drives in 2RU
      • 15x3.5" drives per RU
      • 630x3.5" drives per 42RU
    • SGI Modular InfiniteStorage uses modules of 3x3 3.5" drives, 3x3 2.5" 15mm and 3x6 9.5mmx2.5" drives in a 4RU chassis. Drives are mounted vertically.
      • Modules are accessible from front and rear.
      • All modules are accessible externally and are removable.
      • 81x3.5" drives per expansion case, 72x3.5" drives per main chassis, up to 4 expansion cases per main chassis.
      • 720 to 792x3.5" drives per 42RU (same for 2.5" 15mm drives)
      • 1440 to 1584x 2.5" 9.5mm drives per 42RU
    • SGI/COPAN "MAID" uses "Patented Canisters" to store 14x3.5" drives back-to-back per canister. 8 canisters per 4RU drive shelf, 112x3.5" drives per shelf. These devices no longer appear on the SGI website, though have featured in a Press Release.
      • MAID attempts to reduce power consumption by limiting active drives to at most half installed drives.
      • Up to 8 shelves per 42RU unit.
      • Power, CPUs, cache and management occupy a further 2x4RU.
      • 21.33x3.5" drives per RU (28x3.5" drives per RU per shelf)
      • 896x3.5" drives per 42RU
    • EMC Isilon S200, X200 Nodes [2011 figures] are 2RU units
      • EMC support 144 Nodes per cluster
      • 24x2.5" drives and 12x3.5" drives respectively
      • 12x2.5" drives per RU and 6x3.5" drives per RU respectively
      • 504x2.5" and 252x3.5" drives per 42RU
      • 5.5 racks to support maximum 144 node cluster [unchecked for 2014 config]
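The per-RU and per-42RU figures above can be recomputed from chassis size alone. A quick sketch (drive counts from the vendor specs quoted in this post; totals assume whole chassis only, so rack-level products that reserve space for power and management, like COPAN with its 2x4RU of power/cache/management, come in lower than the naive number):

```python
# Recompute "drives per RU" and whole-chassis-per-42RU density figures.
# Rack-level products reserve space for power/controllers, so real totals
# can be lower (e.g. COPAN: max 8 shelves = 896 drives, not 10 x 112).
solutions = {
    # name: (drives per chassis, chassis height in RU)
    "Backblaze pod":    (45, 4),
    "Supermicro cloud": (12, 1),
    "Supermicro HA":    (36, 4),
    "Open Vault":       (30, 2),
    "SGI MIS Server":   (72, 4),
    "SGI MIS JBOD":     (81, 4),
    "COPAN MAID shelf": (112, 4),
}

density = {
    name: (drives / ru, drives * (42 // ru))   # whole chassis in a 42RU rack
    for name, (drives, ru) in solutions.items()
}

for name, (per_ru, per_rack) in density.items():
    print(f"{name:18s} {per_ru:6.2f} drives/RU  {per_rack:4d} per 42RU (chassis only)")
```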

In the previous piece, I said there were just 4 'interesting' drive orientations of the 6 possible, due to "flat plate" blocking of airflow.

If you include a constraint for uninterrupted front-back airflow, there are only two good orientations:
  • the drive connectors, on the shortest side, have to be to one side (bottom, or left/right)
    • vertical, thin-side forward: 100mm high x thickness (5mm-15mm) wide
      • allows many drives across the rack (table below)
      • stacked drives take 75mm depth, allowing 6 in 450mm; 900mm-deep chassis are possible
    • horizontal, thin-side forward: thickness (5mm-15mm) high x 100mm wide
      • allows 4 drives across the rack
      • stack drives vertically with small separation.
  • drive connectors in-line with airflow will restrict it, eliminating horizontal & vertical end-to-end.



Table of 2.5" Platter equivalent across 19" Rack

Rack Width: 435mm (17.125in, allows for sliders)
Interdrive space (cooling): 52.5mm
Usable space: 382.5mm

Drive thickness   Count across rack   2.5" platter equivalent
25.4mm (3.5")     15                  106 (75 actual platters)
15mm              25                  100
9.5mm             40                  120
7mm               54                  108
5mm               76                  76
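The table can be generated directly from the thickness and platter counts. A sketch (the 3.5" row uses an assumed ~1.41x platter-capacity factor, inferred from the post's 106-equivalent for 75 actual 3.5" platters):

```python
from math import floor

USABLE_MM = 435 - 52.5   # 17.125in internal width minus 52.5mm cooling gap

# thickness (mm) -> platters per drive; 25.4mm is a five-platter 3.5" drive,
# the rest are 2.5" drives stood vertically, thin side forward.
platters = {25.4: 5, 15: 4, 9.5: 3, 7: 2, 5: 1}
EQUIV = 106 / 75         # assumed 3.5"-to-2.5" platter-capacity factor (~1.41)

rows = {}
for t, p in platters.items():
    count = floor(USABLE_MM / t)     # drives across the rack
    actual = count * p               # physical platters in the row
    rows[t] = (count, round(actual * EQUIV) if t == 25.4 else actual)

for t, (count, equiv) in sorted(rows.items()):
    print(f"{t:>5}mm: {count:2d} across, {equiv:3d} equivalent 2.5\" platters")
```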



Backblaze V4.

http://www.zdnet.com/buy-a-180tb-array-for-6gb-7000027459/
http://blog.backblaze.com/2014/03/19/backblaze-storage-pod-4/
http://www.highpoint-tech.com/USA_new/series_R750-Overview.htm

$688ea for the new SAS/SATA cards (to Backblaze in 100 Qty):

"The Rocket 750's revolutionary HBA architecture allows each of the 10 Mini-SAS ports to support up to four SATA hard drives: A single Rocket 750 is capable of supporting up to 40 4TB 6Gb/s SATA disks,"

$3,387.28 full chassis
$9,305 total 180TB [$131/drive, $5917.72 total]

$5,403 ‘Storinator’ by Protocase.
$7,200 drives: 45 x $160 per 4TB drive
$12,603 Protocase + drives

Parts
$872.00 case
$355.99 PSU
~$360 motherboard, CPU, RAM
$1,376.40 SATA cards (2)
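As a check on the $/GB comparison, a quick sketch (prices as listed above; "GB" here is decimal, so 4TB = 4,000GB):

```python
# Backblaze V4 pod vs Protocase 'Storinator': cost per raw GB (2014 US$).
N, TB = 45, 4                       # 45 x 4TB drives = 180TB raw

pod_total  = 3387.28 + 5917.72      # chassis + drives (~$131.50/drive)
stor_total = 5403.00 + 45 * 160.00  # Storinator chassis + $160/drive

pod_per_gb  = pod_total  / (N * TB * 1000)
stor_per_gb = stor_total / (N * TB * 1000)

print(f"Backblaze:  ${pod_total:,.2f} total, ${pod_per_gb:.4f}/GB")
print(f"Storinator: ${stor_total:,.2f} total, ${stor_per_gb:.4f}/GB")
```

Both land near 5-7 cents per raw GB, before redundancy, power or service costs.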



From SuperMicro: scale-out storage products exist already, mostly 3.5", but some 2.5"

http://www.supermicro.com/products/rack/scale-out_storage.cfm
- 360 3.5” drives in 42RU. 4Ux36-bay SSG

And for Hadoop, they go a little denser.
http://www.supermicro.com/products/rack/hadoop.cfm

Supermicro have multiple innovative designs [below], with 9-12x3.5” drives/RU or 12x2.5” drives/RU, plus their MicroBlade & MicroCloud servers with proprietary motherboards & high bandwidth.

e.g. Hadoop, 1RU, fixed:

12x3.5” drives in a 1RU chassis. 43mm x 437mm x 908mm (H, W, D)
- 2 full length columns (3 drives, fans, 2 drives)
- 1 short column (fans, 2 drives)
- PSU, m’board, Addon-card (PCI on riser) and front panel on one side of chassis
- AddOnCard w/ 8x LSI 2308 SAS2 ports and 4x SATA2/3 ports
- dual 1Gbps ethernet
- m’board 216mm x 330mm, LGA 1155/Socket H2, 4xDDR3 slots
- 650W
http://www.supermicro.com/products/system/1U/5017/SSG-5017R-iHDP.cfm?parts=SHOW
http://www.supermicro.com/products/motherboard/Xeon/C202_C204/X9SCFF-F.cfm
http://www.supermicro.com/products/accessories/addon/AOC-USAS2-L8i.cfm?TYP=I

Front and back Hot-swap, 4RU, 36 drives:

- 178mm x 437mm x 699mm (H, W, D)
- dual CPU, 4x1Gbps ethernet
- 2x1280W Redundant Power Supplies
- 24xDDR3 slots
- LSI 2108 SAS2 RAID AOC (BBU optional), Hardware RAID 0, 1, 5, 6, 10, 50, 60
- 2x JBOD Expansion Ports
- BPN-SAS2-826EL1 826 backplane with single LSI SAS2X28 expander chip
- BPN-SAS2-846EL1 Backplane supports up to 24 SAS/SATA drives

http://www.supermicro.com/products/system/4U/6047/SSG-6047R-E1R36N.cfm
http://www.supermicro.com/products/motherboard/Xeon/C600/X9DRi-LN4F_.cfm

Alt. system, 2RU, 24x2.5” hot-plug:

- 89mm x 437mm x 630mm (H, W, D)
- 12Gbps SAS 3.0
- no CPU specified.
http://www.supermicro.com/products/chassis/2U/216/SC216BE1C-R920LP.cfm



Open Vault/Open Rack.

http://www.opencompute.org/projects/open-vault-storage/
http://www.opencompute.org/projects/open-rack/

The Open Vault is a simple and cost-effective storage solution with a modular I/O topology that’s built for the Open Rack.
The Open Vault offers high disk densities, holding 30 drives in a 2U chassis, and can operate with almost any host server.
Its innovative, expandable design puts serviceability first, with easy drive replacement no matter the mounting height.

Open Rack
http://en.wikipedia.org/wiki/19-inch_rack#Open_Rack

Open Rack is a mounting system designed by Facebook's Open Compute Project that has the same outside dimensions as typical 19-inch racks (e.g. 600 mm width), but supports wider equipment modules of 537 mm or about 21 inches.



SGI® Modular InfiniteStorage™

http://www.sgi.com/products/storage/modular/index.html
http://www.sgi.com/products/storage/modular/server.html
http://www.sgi.com/products/storage/modular/jbod.html

Image: http://www.sgi.com/products/storage/images/mis_jbod.jpg [whole unit]
Image: http://www.sgi.com/products/storage/images/mis_brick.jpg [module: 3x3, vertical mount]

Extreme density is achieved with the introduction of modular drive bricks that can be loaded with either nine 3.5 inch SAS or SATA drives, or 18 2.5 inch SAS or SSD drives.

SGI® Modular InfiniteStorage™ JBOD

(SGI MIS JBOD) is a high-density expansion storage platform, designed for maximum flexibility and the ability to be tuned to specific customer requirements.
Whether as a standalone dense JBOD solution, or combined with SGI Modular InfiniteStorage Server (SGI MIS Server), SGI MIS JBOD provides unparalleled versatility for IT managers while also dramatically reducing the amount of valuable datacenter real estate required to accommodate rapidly-expanding data needs.

Up to 81 3.5" or 2.5" Drives in 4U
up to 3.2PB of disk capacity can be supplied within a single 19" rack footprint.

SGI MIS JBOD shares the same innovative dense design with SGI MIS Server, which can be configured with up to
81 3.5" or 2.5" SAS, SATA SSD drives.
This enables SGI MIS JBOD to have up to 324TB in 4U.

SGI MIS JBOD comes with a SAS I/O module, which can accommodate four quad port connectors or 16 lanes.
An additional SAS/IO module can be added as an option for increased availability.



SGI Modular InfiniteStorage Platform Specifications

http://www.sgi.com/pdfs/4344.pdf

Servers are hot pluggable, and can be serviced without impacting the rest of the chassis or the other server.
Through an innovative rail design, the chassis can be accessed from the front or rear, enabling drives and other components to be non disruptively replaced.
RAIDs 0, 1, 5, 6 and 10 can be deployed in the same chassis simultaneously for total data protection.
Battery backup is used to allow for cache de-staging for an orderly shutdown in the event of power disruptions.

Connectivity Up to 4 SGI MIS JBODs per SGI MIS Server enclosure
Rack Height 4U
Height 6.94” (176 mm)
Width 16.9” (429.2 mm)
Depth 36” (914.4 mm)
Max weight 250 lbs. (113kg)
Internal Storage
Up to 72 X 3.5” or 2.5” 15mm drives (max 288TB)
Up to 144 x 2.5” 9.5mm drives.
RAID or SAS Controllers
Single server: Up to four 8-port cards
Dual server: Up to two 8-port cards per server motherboard (four per enclosure)
External Storage Attachment Up to 4 SGI MIS JBOD chassis per server enclosure

JBOD modules:
Connectivity Four quad port SAS standard. Eight quad port SAS optional
Internal Storage
Up to 81 X 3.5” drives (max 324TB)
Up to 162 x 2.5” 9.5mm drives



SGI® COPAN™ 400M Native MAID Storage

https://web.archive.org/web/20130402000726/http://www.sgi.com/pdfs/4212.pdf

  • up to 2.7PB of raw data in a compact storage footprint.
  • 8x4RU shelves, 8 canisters per shelf, 112 drives per 4RU shelf.
  • 2x4RU power, cache and management
  • Up to 6,400 MB/s (23TB/hr) of disk-based throughput
  • Idling drives cuts power consumption of the storage system by up to 85%.
  • Patented Disk Aerobics® Software
  • Patented Power Managed RAID® Software. Provides full RAID 5 data protection and helps lower energy costs by limiting spinning drives to a maximum of 25% or 50%.


Capacity: 224TB to 2688TB per cabinet - 1 shelf = 112x2TB; 14 drives/canister, 8 canisters/shelf
Shelves: 1–8
Connectivity: Up to eight 8-Gbps Fibre Channel ports [later docs: 16x8Gbps FCAL]

Max Spinning Drives at Full Operation: up to 50%
Spare Drives: 5 per shelf, for a maximum of 40
Disk Drives: 2TB & 3TB SATA
Dimensions: 30” (76.2 cm) W x 48” (121.9 cm) D x 87” (221 cm) H
Clearances: Front 40” (101.6 cm), Rear 36” (91.4 cm), Side 0”
Weight: Maximum 3,193 lbs. (1,447 kg)

Power Consumption @ Standby (min/max) 426/2,080 watts
Power Consumption @ 25% power (min/max) 649/3,819 watts
Power Consumption @ 50% power (min/max) 940/6,554 watts

Storage tiering software: SGI Data Migration Facility (DMF)
D2D backup: IBM® TSM®, CommVault® Simpana®, Quantum® StorNext®
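The COPAN capacity and density claims fall straight out of the canister/shelf arithmetic. A sketch:

```python
# COPAN 400M capacity and floor density, from the spec sheet above.
DRIVES_PER_CANISTER = 14
CANISTERS_PER_SHELF = 8
MAX_SHELVES = 8
FOOTPRINT_SQFT = 10          # 30in x 48in cabinet, about ten square feet

per_shelf = DRIVES_PER_CANISTER * CANISTERS_PER_SHELF   # 112 drives/shelf
max_drives = per_shelf * MAX_SHELVES                    # 896 drives max
min_tb = per_shelf * 2       # one shelf of 2TB drives  -> 224TB
max_tb = max_drives * 3      # eight shelves of 3TB     -> 2688TB
tb_per_sqft = max_tb / FOOTPRINT_SQFT

print(f"{per_shelf} drives/shelf, {max_drives} max drives; "
      f"{min_tb}TB-{max_tb}TB per cabinet; {tb_per_sqft:.1f}TB/ft^2")
```

This matches the marketing figure of "268 TB per ft.²" (268.8, rounded down).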



SGI® COPAN™ 400M Native MAID

https://web.archive.org/web/20130401184152/http://www.sgi.com/products/storage/maid/400M/specifications.html
Specs:
Connectivity Up to sixteen 8-Gbps Fibre Channel ports



MAID Platforms
A New Approach to Data Backup, Recovery and Archiving

https://web.archive.org/web/20130405012939/http://www.sgi.com/products/storage/maid

COPAN products are all based on an Enterprise MAID (Massive Array of Idle Disks) platform, which is ideally suited to cost effectively address the long-term data storage requirements of write-once/read-occasionally (WORO) data.

Solutions

  • Data Archiving
  • Data Protection: Backup & Recovery
  • Storage Tiering
  • Power Efficient Storage

For backup, recovery and archiving of persistent data:

Unprecedented reliability - six times more reliable than traditional spinning disk solutions

  • Massive scalability - from 224 TB to 2688 TB raw capacity
  • High Density - 268 TB per ft.² (2688 TB per .93 m²)
  • Small Footprint - 10 ft.²
  • Energy Efficiency - up to 85% more efficient than traditional, always spinning disk solutions

COPAN technology simplifies your long-term data storage,
drastically lowers your utility costs, and
frees up valuable data center floor space.

  • Lowest Cost Solution
    • Savings in operational costs and capital expenses
  • Smallest Disk-Based Storage Footprint
    • 268 TB per square foot or 2688 TB per .93 m²
  • High Performance
    • Fast Restores up to 23 TB/hour system
  • Breakthrough Energy Efficiency
    • Save up to 85% on power and cooling costs

COPAN Patented Canister Technology

  • Patented mounting scheme to eliminate "rotational vibration" within a storage shelf
  • Canister technology enables efficient and quick servicing of 14 disk drives
  • Data is striped across canisters within a shelf in 3+1 RAID sets


Storage Environment Factor         Tape   Traditional Disk   COPAN Enterprise MAID
Quick Data Recovery                                          X
Cost per GB                                                  X
Operating Expense                  X                         X
Scalability                                                  X
Small Footprint                                              X
Power & Cooling Efficiency                                   X
Ease of Management                                           X
Built for Long-Term Data Storage   X                         X


https://web.archive.org/web/20120619124024/http://www.sgi.com/pdfs/4223.pdf

MAID is designed for Write Once Read Occasionally (WORO) applications.
Six times more reliable than traditional SATA drives

Disaster Recovery Replication Protection:
Three-Tier System Architecture
• Simplifies system management of persistent data
• Scales performance with capacity
• Enables industry-leading, high density, storage capacity in a single footprint
• Enhances drive reliability with unique disk packaging, cooling and vibration management

Patented Canister Technology
• Patented mounting scheme eliminates “rotational vibration” within a storage shelf
• Canister technology enables efficient and quick servicing of the 14 disk drives
• Data is striped across canisters within a shelf in 3+1 RAID sets



MONDAY, MAY 16, 2011

EMCWorld: Part2 - Isilon Builds on Last Month's Announcements with Support for 3TB Drives and a 15PB FileSystem

http://www.storagetopics.com/2011/05/emcworld-part2-isilon-builds-on-last.html

With list pricing starting at $57,569 ($4.11/GB) for the S200, the value metric is not the traditional capacity view but $/IOP and $/MBs, i.e. $6/IOP and $97/MBs respectively for the S200. With a starting price of $27,450/node, the X200 comes in at nearly $13/IOP and $110/MBs, but has an attractive starting price of $1.14/GB, even more attractive when their 80% utilization claim is factored in.

They doubled their IOP number for a maximum cluster size of 144 nodes to 1.4M IOPs, and doubled their maximum throughput to 85GB/s. It is not just the power of Intel (Westmere/Nehalem upgrades) that has driven this performance increase but also the intelligent use of SSDs. By supporting HDDs and SSDs in the same enclosure and by placing the file metadata on SSD, performance gets a significant boost. The IOP number has not yet been submitted to SpecFS, so the performance number is still “unofficial”.

The latest announcement last week at EMCWorld increased the maximum supported single file system, single volume, to 15PB, plus support for 3TB HDDs on their capacity platform, the NL-108. Worth noting that this impressive scalability is only for the NL108 configured with 3TB drives. In comparison, the higher-performance X200 scalability tops out at 5.2PB.

Feature                  S200                          X200
Form Factor              2U                            2U
Maximum Drives           24                            12
Drive Types              2.5” SAS, SSD                 3.5” SATA, SSD
Maximum Node Capacity    14TB                          24TB
Max Memory               96GB                          48GB
Global Coherent Cache    14TB                          7TB
Max Cluster Size         144                           144
Protocols                NFS, CIFS, FTP, HTTP, iSCSI (NFS 4.0, Native CIFS and Kerberized NFSv3 supported)
Maximum IO/s             1,414,944 IO/s                309,312 IO/s
Maximum Throughput       84,960 MB/s                   35,712 MB/s
List Price Starting at   $57,569/node                  $27,450/node
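The $/IOP, $/MBs and $/GB figures in the text follow from these specs once you divide the cluster maxima by the 144 nodes. A sketch:

```python
# Derive Isilon per-node value metrics from the 2011 cluster-level specs.
NODES = 144
nodes = {
    "S200": dict(price=57_569, iops=1_414_944, mbps=84_960, cap_tb=14),
    "X200": dict(price=27_450, iops=309_312, mbps=35_712, cap_tb=24),
}

metrics = {}
for name, n in nodes.items():
    per_node_iops = n["iops"] / NODES       # cluster max IO/s per node
    per_node_mbps = n["mbps"] / NODES       # cluster max MB/s per node
    metrics[name] = (
        n["price"] / per_node_iops,         # $/IOP
        n["price"] / per_node_mbps,         # $/MBs
        n["price"] / (n["cap_tb"] * 1000),  # $/GB (decimal)
    )

for name, (d_iop, d_mbs, d_gb) in metrics.items():
    print(f"{name}: ${d_iop:.2f}/IOP  ${d_mbs:.0f}/MBs  ${d_gb:.2f}/GB")
```

The results land on the quoted ~$6/IOP, ~$97/MBs and $4.11/GB for the S200, and ~$13/IOP, ~$110/MBs and $1.14/GB for the X200.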


Front and center in Isilon’s promotional pitches are the advantages of scale-out, namely scalability, efficiency, ease of use and availability. They are positioning themselves as the scale-out architecture integrated with capabilities that elevate it to enterprise class. This, they believe, serves them well in their traditional space, as well as positioning them to penetrate the commercial HPC, Big Data space.



WEDNESDAY, MAY 18, 2011

EMCWorld; Part 3, The Final installment; VNXe, ATMOS and VMWare

http://www.storagetopics.com/2011/05/emcworld-part-3-final-installment-vnxe.html

VNX Series: As you all are probably well aware the VNX series is the EMC mid-tier, unified storage offering that is in the process of replacing the CLARiiON and Celerra lines.
It was launched back in January and continues to evolve as these announcements suggest:

1. FLASH 1st is the VNX SSD strategy which incorporates FAST, FASTCache and a soon-to-be-available server-side cache, code-named Project Lightning.
On this feature I must admit I became a bit of a convert, see my comments in my earlier blog.
2. A Cloud Tiering Appliance designed to offload cold unstructured data from the VNX to the cloud was introduced.
This device can also operate as a migration tool to siphon data from other devices such as NetApp.
This announcement really resonated with me, more coverage in my earlier blog.
3. A ruggedized version of the SMB-focused VNXe was introduced.
It was mentioned a couple of times in the presentations that EMC have not done well in the federal space.
This is an obvious attempt to help fix that deficiency.
Napolitano also mentioned that 50% of the customers who have purchased VNXe were new to EMC and during the 1st quarter EMC signed 1100 new VNXe partners.
4. SSD support for the VNXe.
Another reinforcement of EMC’s commitment to solid state storage.
5. VAAI support for NFS and block enhancements including thin provisioning.
No surprise here - a deeper integration with VMWare which all storage vendors should be doing.
EMC just happens to have a bit of an advantage.
6. A Google Search Appliance was introduced.
This device enables updated files to be searched sooner and comes in two flavors: the 7007, supporting up to 10M files, and the 9009, supporting up to 50M files.
Clever announcement; in the world of big data findability (my word) is valuable currency.
7. A high density disk enclosure supporting 60, 3.5” SAS, NL-SAS or Flash drives.
GB/RU is one of today’s metrics and this helps EMC’s capacity positioning big time.
8. Doubled bandwidth performance with a high bandwidth option that triples the 6Gb SAS backend ports.
Bandwidth, IOPS & capacity: an interesting balancing act, particularly when you throw in cost.


ATMOS: I first started to write about Atmos when Hulk was the star of the rumor mill; boy, how time flies.
Hulk is still there in its evolved instantiation, but its role has most certainly moved to that of a back-up player in the chorus line.
The lead player, ATMOS 2.0 featured in the announcement with the declaration of a significant performance boost.
The claim is a 5x increase in performance, with a current ability to handle up to 500M objects per day.
They have also changed their protection scheme, which can increase storage efficiency by 65%.
Change is probably the wrong word: they continue to support the multiple-copy approach but have added their new object-segmentation approach.

Previously, data protection was achieved by the creation of multiple copies that were distributed within the Atmos cloud.
EMC Geo Parity, as it is called, is similar to the Cleversafe approach: rather than storing multiple copies of a complete object, it breaks the object into 12 segments, four of which are parity, analogous to a RAID group.
These segments are then distributed throughout the cloud, with the data protected with tolerance to multiple failures.

VMWare: Not much in terms of announcements, but some of the adoption stats were interesting.

• VM migration (vMotion) has increased from 53% to 86%
• High availability use has increased from 41% to 74%
• Dynamic Resource Scheduling (DRS) has increased from 37% to 64%
• Dynamic migration (storage vMotion) has increased from 27% to 65%
• Fault tolerant use has grown from zero to 23%



IBM Delivers Technology to Help Clients Protect and Retain "Big Data"

http://www-03.ibm.com/press/us/en/pressrelease/34452.wss

Introduces industry-first tape library technology capable of storing nearly 3 exabytes of data -- enough to store almost 3X the mobile data in U.S. in 2010

ARMONK, N.Y., - 09 May 2011: IBM (NYSE: IBM) today announced new tape storage and enhanced archiving, deduplication offerings designed to help clients efficiently store and extract intelligence from massive amounts of data.

At the same time, demand for storage capacity worldwide will continue to grow at a compound annual growth rate of 49.8 percent from 2009-2014, according to IDC (1). Clients require new technologies and ways to capitalize on the growing volume, variety and velocity of information known as "Big Data."

IBM System Storage™ TS3500 Tape Library is enabled by a new, IBM-developed shuttle technology -- a mechanical attachment that connects up to 15 tape libraries to create a single, high capacity library complex at a lower cost. The TS3500 offers 80 percent more capacity than a comparable Oracle tape library and is the highest capacity library in the industry, making it ideal for the world's largest data archives (3).


Thursday, March 20, 2014

Storage: Efficiency measures

In 2020 we can expect bigger disk drives and hence Petabyte stores. Price per bit will come at a premium; it won't track capacity as it does now: larger-capacity drives will cost more per unit.

What are the theoretical limits on which Storage solution "efficiency" can be judged?

We're slowly approaching what could be the last factor-10 improvement, to 10Tbits/in², in rotational 2-D magnetic recording technologies of Hard Disk Drives. Jim Gray (~2000) and Mark Kryder (2009) suggested 7TB/platter for 2.5" disk drives by 2020, assuming a 40%/yr capacity growth.

Rosenthal et al (2012) suggest that, like CPU-speed "Moore's Law", disk capacity growth rates have slowed, though 100Tbits/in² may be possible in the far future. They predict 1.8Tbits/in² commercially available in 2020, vs 0.6-0.7Tbits/in² currently.

Three-platter 2.5" drives are normally 12.5mm thick, but 9.5mm three-platter drives became available in 2013 (HGST, 1.5TB). Four-platter 2.5" drives are usually 12.5mm or 15mm, according to Seagate, which fits three 667GB platters in a 9.5mm drive for 2TB total (using 2.3W for read/write).

Slim-line 7mm and 5mm 2.5" drives are on the market; 7mm drives are two-platter.

In 2020, the 2.5" disk drive market will differentiate by both thickness (5, 7, 9.5, 12.5,15mm) and number of platters, from 1 to 4. Laptop and ultrabook manufacturers will determine if 7mm replaces 9.5mm as the standard consumer portable form factor, giving them a volume production price advantage.

Per-platter, we can expect 1.5TB-2TB, or total 1TB-6TB in 2.5" drives [vs 5 platter 3.5" drives at 15TB].

Storage system builders will be able to select drive combinations not just on SSD vs HDD, but on:
  • Cost per GB
  • GB per cubic-inch
  • Watts per GB, and
  • spindles per TB, setting maximum IO/sec and streaming IO performance.
Questions:
  • How many drives can fit in a single rack?
    • How much raw capacity?
  • How much power would they use? [and how much cooling]
  • How much does it all weigh? [can the floor hold it up?]
  • Time to back it up?
    • Dependent on external ports and interface speeds.
  • Performance:
    • How many IO/sec?
    • Aggregate internal streaming throughput?
    • Normalised multi-media transactions/sec: 1MB Object requests/sec?
    • Scan Time for searching, data mining, disk utilities & RAID rebuild?
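The backup and scan-time questions reduce to simple bandwidth arithmetic. A toy calculation (every figure here is an illustrative assumption, not a vendor spec: a 450-drive rack of 4TB drives, 150MB/s sustained per drive, 40Gbit/s of external backup bandwidth):

```python
# Hypothetical rack: how long do a full scan and a full backup take?
DRIVES = 450          # assumed: a rack of Backblaze-style 4RU pods
TB_PER_DRIVE = 4
STREAM_MB_S = 150     # assumed sustained streaming rate per drive
EXT_GBIT_S = 40       # assumed external (backup) bandwidth

raw_tb = DRIVES * TB_PER_DRIVE                        # 1800 TB raw
# Scan: every drive reads itself in parallel, so time = one drive's capacity.
scan_hours = (TB_PER_DRIVE * 1e6) / STREAM_MB_S / 3600
# Backup: the whole store must cross the external ports.
backup_hours = (raw_tb * 8_000) / EXT_GBIT_S / 3600   # TB -> Gbit

print(f"{raw_tb}TB raw; scan ~{scan_hours:.1f}h; "
      f"backup ~{backup_hours:.0f}h (~{backup_hours/24:.1f} days)")
```

Even with generous assumptions, a full scan is hours and a full external backup is days: internal aggregate bandwidth dwarfs any plausible external pipe.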

Disk drives have 3 different dimensions: WxDxH and 3 different 'faces', WxD, WxH, DxH
For 2.5" drives, approx 70mm x 101mm x 9.5mm
For 3.5" drives, 101.6mm (4 inch) x 146mm (5.75 inch) x 19-26.1mm (nominally "1 inch")

Drives can be placed with any of the 3 faces down and rotated about a vertical axis, giving potentially 6 orientations.
In practice, the thinnest cross-section has to face forward, into the airflow, to allow effective cooling.
This gives just 4 orientations: 2 'flat' and 2 'vertical'.

19 inch racks are "mostly standard":
  • 19 inches across the faceplate, posts are each 5/8 inch, fasteners & holes are well defined.
    • But need extra space either side for cabling and airflow, increasing external rack dimension.
  • 17.75 inches internal clearance (450mm). With sliders: 17.125 inches internal (435mm)
  • 1RU (Rack Unit) = 1.75 inches high
  • convention is 42RU high = 73.5 inches of usable space
    • Allow for plinth, first usable RU is off the floor
    • Allow for head piece, plate + structural rails, fans and cable organisers on top
  • Depth varies on use:
    • 600mm (24 inch) common in Telecoms
    • 966mm (38 inch) common in IT.
    • Need extra space front and rear for doors, cabling, power strips, ...
  • External dimensions: 30in x 48in x 87in (WxDxH)
    • Notionally, a single rack uses ten square feet (about 0.93 square meters) of floor space.
    • side clearance of zero: racks bolt together to stabilise the structure.
    • Front and rear clearance, often 40" and 30" are needed to open doors and load/unload parts.
    • Aisles are needed between rows to allow work and access.
      • In many facilities, need to open two doors at once, 50" minimum.
    • "Hot Aisle": exhaust adjacent rows into the one sealed area with extractor fan.
  • Floor space in server rooms
    • Only around 33%-50% of the available floor space can be used for racks.
    • Racks are best organised in rows parallel to long dimension of room
    • long rooms need breaks in rows, creating cross-aisles
    • Additional clearance is needed around walls of rooms
    • Entrance doors need to be double and handle shipping pallets
    • Extra spare space is needed around doors for staging equipment in, and storing packing waste before removal
    • Dedicated space is needed for "Air handling units" (at least two), power distribution boards and fire control systems. These need clearance for servicing and removal/replacement.
    • In room UPS units need space and cooling (No-break power supplies)
      • lead-acid battery banks of any capacity need to be housed in separate, spark-proof rooms with additional fire control and sprinklers.

Stacking 3.5 inch drives, no allowance for cooling, wiring, power or access:

3.5" drives, at 4 inches wide can be stacked flat, 4 abreast in a rack.
6 drives will fit end-to-end in a 36"-38" cabinet, for 24 drives in a layer.
Alternatively, 17 drives can be stood on their sides across a rack, in 4"-tall layers.
That gives 102 drives/layer (17 across x 6 deep) and 1836 drives/rack (18 layers).

For nominal 1" thick drives stacked flat, 73 layers fit in the 73.5" usable height, giving 1,752 drives per rack.
With 15TB 3.5" drives, 26PB/rack.
With 4TB 3.5" drives, 7PB/rack.
7200 RPM 3.5" drives consume 8W-10W, or 14kW-16kW per rack.
7200 RPM (120Hz) drives are each capable of ~240 IO/sec, for ~420k IO/sec aggregate.
3.5" drives weigh ~600grams each, for a load of about 1 ton (or ~1,000kg/m²)
15TB 3.5" drives will stream at around 2Gbps, for 3.5Tbps aggregate internal bandwidth.
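The flat-stacked 3.5" numbers above, as a sketch (per-drive power, weight and IO/s as assumed in the text; the aggregates come out slightly above the rounded figures quoted):

```python
# 3.5" drives stacked flat: 4 abreast x 6 deep = 24 per 1-inch layer.
PER_LAYER = 4 * 6
LAYERS = int(73.5 // 1)       # 73 one-inch layers in 73.5" usable height

drives = PER_LAYER * LAYERS   # 1,752 drives
pb_15tb = drives * 15 / 1000  # PB with 15TB drives
pb_4tb = drives * 4 / 1000    # PB with 4TB drives
iops = drives * 240           # 7200rpm, ~240 IO/s each
kw = drives * 9 / 1000        # assumed ~9W per drive
tonnes = drives * 0.6 / 1000  # ~600g per drive

print(f"{drives} drives, {pb_15tb:.1f}PB (15TB) / {pb_4tb:.1f}PB (4TB), "
      f"{iops/1000:.0f}k IO/s, {kw:.1f}kW, {tonnes:.2f}t")
```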

Stacking 2.5" drives, at 5400 RPM (90Hz):

Thickness   Drives     IO/sec    Capacity (small drives)   Capacity (large drives)
5mm         17,800     3,204k    @ 0.5TB: 9PB/rack         @ 1.5TB: 27PB/rack
7mm         12,714     2,288k
9.5mm       9,368      1,686k    @ 1TB: 9.5PB/rack         @ 3TB: 30PB/rack
12.5mm      7,120      1,281k
15mm        5,933      1,069k    @ 2TB: 12PB/rack          @ 6TB: 36PB/rack

Power consumption, at 1.2W per 9.5mm drive (9,368 drives), is around 11kW, well below the 14kW-16kW needed for 3.5" drives.

Aggregate internal bandwidth is higher, even though the per-drive streaming rate is up to 25% lower, 1.5Gbps.
For 9.5mm drives, 14Tbps aggregate internal bandwidth (3TB drives).

5mm drives weigh around 95grams each and 15mm drives 200grams, the same weight ± 15% as 3.5" drives.
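The 2.5" drive counts above scale inversely with thickness: working backwards, each row corresponds to a constant budget of roughly 89,000mm of stackable drive thickness per rack (my inference from the table, not a figure stated anywhere), with each 5400 RPM (90Hz) drive assumed good for ~180 IO/sec. A sketch:

```python
# 2.5" stacking: drive count ~= (thickness budget) / (drive thickness).
BUDGET_MM = 89_000   # inferred from the table above, not a stated figure
IOPS_EACH = 180      # 5400rpm = 90Hz, ~2 IOs per revolution (assumed)

counts = {t: int(BUDGET_MM // t) for t in (5, 7, 9.5, 12.5, 15)}

for t, n in counts.items():
    print(f"{t:>5}mm: {n:6,d} drives, {n * IOPS_EACH / 1000:,.0f}k IO/sec")
```

This reproduces the table's drive counts exactly and its IO/sec column to within rounding.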