There are 3 different measures of area:
- "gross footprint". The room plus all ancillary equipment and spaces.
- "room". The total area of the room. Each rack, with aisles and work-space, uses 16-20 sq ft.
- "equipment footprint". The floor area directly under computing equipment. 24" x 40", ~7 sq ft.
The 2½ inch drive form-factor is 100.5 mm x 69.85 mm x 7-15 mm.
Enterprise drives are typically 12.5 or 15 mm thick. Stood vertically on edge, 20-22 removable 2½ inch drives can be fitted across a rack in a 2RU space, with around 15 mm of the 89 mm (2RU) height unused.
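A minimal sketch of the packing arithmetic above. The ~20 mm slot pitch (15 mm drive plus clearance for a tray and guides) is an assumption chosen to land in the 20-22 drives-per-2RU range the text cites:

```python
# How many 15 mm enterprise 2.5" drives fit on edge across a 19" rack
# opening in a 2RU bay? Slot pitch of ~20 mm is an assumption
# (15 mm drive + tray/guide clearance), not a measured figure.
RACK_OPENING_MM = 17 * 25.4    # usable width of a 19" rack, ~431.8 mm
RU_MM = 1.75 * 25.4            # 1 rack unit = 1.75" = 44.45 mm
DRIVE_W_MM = 69.85             # 2.5" drive width, stood on its long edge
SLOT_PITCH_MM = 20             # assumed pitch per drive

drives_across = int(RACK_OPENING_MM // SLOT_PITCH_MM)
unused_height = 2 * RU_MM - DRIVE_W_MM   # headroom left in the 2RU bay

print(drives_across)           # 21 drives across, mid-range of the 20-22 cited
print(round(unused_height))    # ~19 mm spare, close to the text's ~15 mm once
                               # mounting hardware is allowed for
```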
2½ inch drives don't fit well in standard 19" server racks (17" wide, 900-1000 mm deep, 1RU = 1.75" tall), especially if you want equal access (e.g. from the front) to all drives without disturbing any others. Communications racks are typically 600 mm deep, but are not used for equipment in datacentres.
With cabling, electronics and power distribution, a depth of 150 mm (6") should be sufficient to house 2½ inch drives. Power supply units take additional space.
In a server rack 17" wide and 1000 mm deep, mounting drives only at the front would leave 850 mm of depth unused.
Mounting drives front and back would still leave 700 mm unused, and would create significant heat-removal problems, especially in a "hot aisle" facility.
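The depth arithmetic above, as a sketch. The 150 mm bay depth is the text's own estimate for a 2½ inch drive plus cabling and electronics:

```python
# How much of a 1000 mm deep rack goes unused if 2.5" drive bays only
# need ~150 mm of depth (drive + backplane + cabling, per the estimate above)?
RACK_DEPTH_MM = 1000
BAY_DEPTH_MM = 150   # assumed depth for drive + electronics + cabling

single_sided_waste = RACK_DEPTH_MM - BAY_DEPTH_MM        # front mounting only
dual_sided_waste = RACK_DEPTH_MM - 2 * BAY_DEPTH_MM      # front and back

print(single_sided_waste)   # 850 mm of depth wasted
print(dual_sided_waste)     # 700 mm wasted, plus heat-removal problems
```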
The problem reduces to physical arrangements that either maximise exposed area (long, thin rectangles versus the close-to-square 19" rack, dual-sided with a chimney) or maximise surface area while minimising floor space: a circle or cylinder.
The "long-thin rectangle" arrangement was popular in mainframe days, often as an "X" or a central-spine with many "wings". It assumes that no other equipment will be sited within the working clearances needed to open doors and remove equipment.
A cylinder, or pipe, must contain a central chimney to remove waste heat. Cabling for power and data also needs planning: power supplies can sit in the plinth, since the central cooling void can't be blocked mid-height and the extraction fan(s) need to be mounted at the top.
For a 17" diameter pipe, 70-72 disks can be mounted around the circumference at 75 mm height per row, 20-24 rows high after allowing for a plinth and normal rack heights. This leaves a central void of around 7" to handle the ~8 kW drawn by ~1,550 drives.
A 19" pipe would allow 75-80 disks per row and a 9" central void to handle the ~9.5 kW of ~2,000 drives.
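A rough sizing of the "disk pipe", reproducing the figures above. The circumferential pitch, the radial depth (drive length plus connector/backplane), and the per-drive power draw are assumptions chosen to be consistent with the text, not specifications:

```python
import math

# Back-of-envelope sizing of a cylindrical drive enclosure ("pipe"):
# drives stand radially around the circumference, leaving a central
# chimney. All three constants below are assumptions.
SLOT_PITCH_MM = 19       # circumferential space per 15 mm drive, assumed
RADIAL_DEPTH_MM = 127    # drive length (100.5 mm) + connector/backplane, assumed
DRIVE_POWER_W = 5        # assumed draw per enterprise 2.5" drive

def pipe_capacity(diameter_in, rows):
    circumference_mm = math.pi * diameter_in * 25.4
    per_row = int(circumference_mm // SLOT_PITCH_MM)
    drives = per_row * rows
    power_kw = drives * DRIVE_POWER_W / 1000
    void_in = diameter_in - 2 * RADIAL_DEPTH_MM / 25.4   # central chimney diameter
    return per_row, drives, power_kw, void_in

print(pipe_capacity(17, 22))   # ~71/row, ~1,560 drives, ~7.8 kW, 7" void
print(pipe_capacity(19, 25))   # ~79/row, ~1,975 drives, ~9.9 kW, 9" void
```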
A fully populated unit would weigh 350-450 kg.
Perhaps one in 8 disks could be removed to make room for a cable tray, a not unreasonable loss of space.
These "pipes" could be sited in normal rack rows: either at the end of a row, requiring one free rack-space beside them, or in-row, taking 3 rack-spaces.
As a series of freestanding units, they could be mounted in a hexagonal pattern (the closest-packing arrangement for circles) with minimum OH&S clearances around them, which may be 600-750mm.
This provides 4-5 times the density of drives over the current usual 22 shelves of 24 drives (480) per rack, with better heat extraction. At $4-5,000/pa rental per rack-space (or ~$10/drive), it's a useful saving.
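A sketch of the density and cost comparison above: one fully populated ~19" pipe against a conventional rack of shelved drives. The $4,800/pa figure is an assumed midpoint of the text's $4-5,000/pa rack-space rental:

```python
# Density and rental-cost comparison: a ~19" pipe (~2,000 drives) versus
# a conventional rack. The text works with 480 drives/rack; $4,800/pa is
# an assumed midpoint of the quoted $4-5,000/pa per rack-space.
RACK_DRIVES = 480          # text's figure for a shelved rack
PIPE_DRIVES = 2000         # ~19" pipe, fully populated
RENT_PER_RACK_PA = 4800    # assumed midpoint, $/pa per rack-space

density_gain = PIPE_DRIVES / RACK_DRIVES
rent_per_drive = RENT_PER_RACK_PA / RACK_DRIVES

print(round(density_gain, 1))   # ~4.2x the drives of one shelved rack
print(round(rent_per_drive))    # ~$10/drive/pa, matching the text
```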
With current drive sizes of 600-1,000 GB/drive, most organisations would get by with one unit of 1-2 PB.
Update: A semi-circular variant, 40" x 40", for installation in 3 rack-widths might work as well. It requires a door to create the chimney space/central void, and it could vent directly into a "hot aisle".
2008: "Cost Model: Dollars per kW plus Dollars per Square Foot of Computer Floor"
2007: "Data center TCO; a comparison of high-density and low-density spaces"
2006: "Total Cost of Ownership Analysis for Data Center Projects"
2006: "Dollars per kW plus Dollars per Square Foot Are a Better Data Center Cost Model than Dollars per Square Foot Alone"