
All About Hard Disks


The storage capacity of a hard disk is calculated from the size of a sector (512, 2048 or 4096 bytes) multiplied by the number of available sectors. The capacity of the first hard disks was specified in megabytes; from about 1997 it was given in gigabytes, and today there are drives in the terabyte range.
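As a quick illustration of this calculation, the following Python sketch multiplies the sector size by the sector count; the sector count used below is a typical figure for a nominal 1 TB drive, chosen only as an example and not taken from any specific product data sheet.

```python
def disk_capacity_bytes(sector_size: int, sector_count: int) -> int:
    """Raw capacity is simply the sector size times the number of addressable sectors."""
    return sector_size * sector_count

# Illustrative example: 1,953,525,168 sectors of 512 bytes
capacity = disk_capacity_bytes(512, 1_953_525_168)
print(capacity)                    # 1000204886016 bytes
print(capacity / 10**12, "TB")     # ~1.0 TB (decimal units, as marketed)
print(capacity / 2**40, "TiB")     # ~0.91 TiB (binary units, as many operating systems report)
```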
Whereas the way the first drives stored their data was still visible from the outside (sectors per track, number of tracks, number of heads, MFM or RLL modulation), this changed with the introduction of IDE drives in the early 1990s. Less and less of the internal data layout can be seen; the drive is addressed through an interface that hides the internals from the outside.
The largest external USB hard drive commercially available in October 2010 was the Seagate FreeAgent GoFlex Desk with 3000 GB. The evolution of the maximum disk capacity over time shows almost exponential growth, similar to the development of computing power described by Moore's Law. Capacity has roughly doubled every 16 months, at slightly falling prices.
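A minimal sketch of what "doubling roughly every 16 months" implies; the starting capacity and time span below are illustrative assumptions, not manufacturer data:

```python
def projected_capacity_gb(start_gb: float, months: float, doubling_months: float = 16.0) -> float:
    """Exponential growth: capacity doubles once per doubling period."""
    return start_gb * 2 ** (months / doubling_months)

# Starting from the 3000 GB drive mentioned above, five years (60 months) of
# 16-month doublings would suggest roughly 40 TB (a projection, not a measurement):
print(round(projected_capacity_gb(3000, 60)))   # ~40363 GB
```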
Sizes:
[Figures: a standard 3.5" HDD; 2.5" (left) and 5.25" full-height (right) form factors; a 1 GB IBM Microdrive, compatible with CompactFlash Type II]
The physical size of hard drives is traditionally given in inches and is not an exact dimension but a form factor. For example, 3.5" hard disk drives are typically about 101.6 mm wide, which corresponds to 4 inches.
In the course of technical development, ever smaller sizes have been adopted, because smaller drives are less vulnerable to shock and use less power. The data density, by contrast, has grown so quickly that the reduced surface area has only been a brief brake on capacity.
The first disk drive, the IBM 350 from 1956, had a size of 24". In the mid-1970s, models with a size of 8" appeared, which were fairly quickly replaced by the much more manageable and, above all, lighter 5.25" hard disk drives. In between there were also sizes of 14" and 9".
5.25 "disk drives were introduced by Seagate in 1980, since 1997, this species extinct. Some SCSI server drives as well as low cost ATA drive from Quantum BigFoot were the last representatives of this species. We distinguish devices with full-height (3.5 "or about 88 mm) occupying two slots, and half-height (1.75" or about 44 mm). There are also models with even lower height, then the aforementioned BigFoot such as the 4-GB version has a height of only 0.75 "(about 19 mm). The width is 146 mm, the depth is variable and should not be much above 200 mm.
3.5 "hard disk drives were introduced around 1990 and are currently (2010) standard in desktop computers. In the server area, they are now replaced by 2.5-inch models, which can hold more data per square and even at higher speeds consume significantly less power. Most drives have 1 "or 25 mm height. In the server area, there are drives to 1.8 "height (1.8" or about 44 mm). The width is 100 mm, the depth is variable and should not be much above 150 mm. Meanwhile, the future of such hard disk drives for the desktop market is questionable. [2] Toshiba has announced the MK3233GSG a 1.8-inch hard disk drive that holds 320 GB is currently the largest storage capacity in the order.
2.5 "sizes available since its introduction in notebook or use special computers. The traditional height was 0.5 "(12.7 mm), it is now with 0.375" (9.5 mm) and 0.25 "(6.35 mm) flat disk drives - and even notebook computers that require these flat versions . The width is 68 mm, depth 100 mm. The connection is modified from the larger types, such as ATA in a 44-pin connector is used at the same time supplies the operating voltage of +5 volts (pin 1 is usually on the side of the jumper). Unlike the larger drives, these drives come without 12-volt operating voltage (in addition to the 5-volt) from.
Since 2006, Seagate and other manufacturers have also been offering 2.5" hard disk drives for use in servers; they use less electricity, save space and are intended to increase reliability. Other manufacturers include Toshiba, Hitachi and Fujitsu. Since April 2008, Western Digital has marketed the VelociRaptor, a 2.5" hard disk drive (though with 15 mm height) with a 3.5" mounting frame, as a desktop hard drive.
1.8" drives have been used since 2003 in sub-notebooks and various industrial applications, and likewise in large MP3 players. At the end of 2007, Hitachi announced that it would no longer manufacture 1.8" hard drives, since flash memory is displacing this size.
Smaller sizes play a minor role. One of the few exceptions are the so-called Microdrives, hard disk drives with a size of 1" that made relatively large and cheap CompactFlash Type II storage possible, for example in digital cameras. They have meanwhile been almost completely replaced by cheaper flash memory, which is also much more robust, faster, lighter, quieter and more energy efficient. In addition, in 2005 Toshiba temporarily offered hard disk drives with a size of just 0.85" and a capacity of 4 GB. These models were intended only for special applications, including use in MP3 players, and were available only in limited quantities.
Physical structure of the unit:
[Figure: sketch of a hard disk]
A hard disk consists of the following components:
* One or more rotatably mounted disks (platters),
* An axle, also called the spindle, on which the disks are mounted one above the other,
* An electric motor to drive the disk(s),
* Movable read/write heads,
* A bearing each for the platters (mostly hydrodynamic bearings) and for the read/write heads (including magnetic bearings),
* An actuator for the read/write heads,
* The control electronics for motor and head control,
* A DSP for administration, operation of the interfaces and control of the read/write heads. Modulation and demodulation of the read/write head signals is handled here by dedicated hardware and is not carried out directly by the DSP; the processing power required for the demodulation is on the order of 10^7 MIPS,
* DDR RAM for the drive's operating system, programs, temporary data and the disk cache; 2 to 64 MiB are currently common,
* The interface to access the hard disk from the outside and
* A stable housing (see separate section below).
Technical design and material of the discs:
[Figures: an opened hard drive from the 1980s; individual parts of a hard disk]
The discs are mostly made of surface-treated aluminum alloys, some also of glass. They must be dimensionally stable and have low electrical conductivity in order to keep eddy currents small. Since the magnetizable layer is supposed to be particularly thin, the material of the discs themselves must not have magnetic properties; it serves only as a carrier of the magnetic layer. An iron oxide or cobalt layer about one micrometre thick is applied to the discs. Today's hard disks are produced by sputtering so-called high-density storage media such as CoCrPt. The magnetic layer is additionally covered with a diamond-like carbon layer ("carbon overcoat") to prevent mechanical damage. The future reduction of the magnetic bit size requires research both into "ultra-high-density storage media" and into alternative approaches, as the superparamagnetic limit is slowly being approached. In addition, increases in data density have been achieved through better substrates and by optimizing the writing process.
In desktop hard drives from 2000 to 2002, IBM used glass as the material for the discs (Deskstar 75GXP/40GV DTLA-30xxxx, Deskstar 60GXP/120GXP IC35Lxxxx). Newer models of IBM's hard disk drive business (acquired by Hitachi in 2003) again use aluminum, with the exception of server hard drives. One or more rotating discs are stacked inside the disk housing. Hard drives with up to twelve discs have been built; normally there are one to four. Energy consumption and noise level rise with the number of discs within a hard drive family. It is usual to use all surfaces of the platters (n platters, 2n read/write heads). Some drives (e.g. 320 GB disks with 250 GB per platter), however, come with an odd number of read/write heads (here: 3) and leave one surface unused.
With the replacement of longitudinal magnetic recording by perpendicular magnetic recording (PMR), a storage principle known since the 1970s but not mastered at the time, intensive research since 2000 has made it possible to increase the data density further. The first disk storage with this technology came from Hitachi in 2005: a 1.8" hard drive with 60 GB. All hard drives developed since 2008 use this technology (from 200 GB per platter at 3.5").
Spindle bearings and rotational speeds:
Hard drives used in workstations or home PCs, at the moment mostly drives with ATA, SATA, SCSI or SAS interfaces, rotate at speeds of 5,400 to 10,000 rpm. Before the era of ATA hard drives, and still today in high-performance computers and servers, drives with the technically superior SCSI, FC or SAS interfaces have mainly been used; these now usually reach 10,000 or 15,000 rpm. In 2.5" hard drives, which are mainly used in notebooks, the spindle speeds are in the range of 4,200 to 7,200 rpm. The spindles of earlier disks (up to about 2000) ran in ball bearings; more recently, mainly hydrodynamic bearings (fluid dynamic bearings, FDB) are used. These are characterized by a longer service life, lower noise and lower manufacturing costs.
The read-write head unit:
[Figures: head carrier of a hard drive; write head of a 2.5" HDD]
The write head (magnetic head), basically a tiny electromagnet, magnetizes tiny areas of the disc surface differently and thereby writes the data onto the hard disk. The read/write head floats on an air cushion generated by the friction of the air at the rotating disk surface (see ground effect). In 2006 the flying height was in the range of about 20 nm. Because of this small distance, the air inside the enclosure must not contain any impurities. In the latest hard drives with perpendicular recording technology this gap shrinks to 10 nm; the ground effect is very useful here for maintaining the correct flying height of the head over the rotating disk. Hard disks are therefore produced, like semiconductors, in clean rooms.
Until about 1994, the data were read out by induction of the magnetic field of the magnetized area in the coil of the read/write head. Over the years, however, due to the increasing data density, the areas on which individual bits are stored became smaller and smaller. To read this data, smaller and more sensitive read heads were needed. These were developed after 1994: MR read heads and, a few years later, GMR heads (giant magnetoresistance). The GMR read head is an application of spintronics.
In the early days of hard disks, the read/write heads were driven by stepper motors, as the track spacings were still large (see also actuator). In today's drives, voice-coil systems with closed-loop position control perform the positioning. In the Hitachi Deskstar 7K500, the track density is 5.3 tracks/µm and the bit density 34.3 bits/µm, which gives 182 bits/µm².
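The areal density quoted for the Deskstar 7K500 is simply the product of the track density and the linear bit density; a minimal check of the figures above in Python:

```python
# Hitachi Deskstar 7K500 figures quoted in the text
tracks_per_um = 5.3    # track density in tracks/µm
bits_per_um = 34.3     # linear bit density in bits/µm

areal_density = tracks_per_um * bits_per_um
print(round(areal_density, 1), "bits/µm²")   # 181.8, i.e. the ~182 stated above
```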
To protect the disc surfaces from contact with the read/write heads (a so-called head crash), the heads are moved into the landing zone, where they are fixed, before the rotation speed drops significantly when the hard drive is switched off. This parking increases the shock resistance of the hard drive for transport or relocation. The parking position can be outside the discs or on the inner area of the platters. In the latter case the heads rest on a predefined area of the disk that contains no data. The surface of this area is treated so that the heads do not stick to it, which makes the subsequent start-up of the hard disk easier. The fixation is done, for example, by a magnet that holds the read head in place.
[Figure: parking position of the read heads outside the platter stack]
In older hard drives, the read/write heads of almost all models were moved out of the platter stack. Later (1990s, 2000s) the parking position in the interior was increasingly preferred; as of 2008, both variants occur. For notebook drives, the parking position outside the platter stack provides additional protection against damage to the surface of the disks during transport (vibration) of the hard disk.
On older drives, the heads had to be parked explicitly by a command from the operating system before switching off. The heads of modern hard disks can likewise be parked explicitly, since the automatic parking mechanism triggered by a failure of the supply voltage can lead to increased wear. [20] Today the park command is issued automatically by the device driver when the system is shut down.
Modern laptops contain an acceleration sensor that parks the heads during a possible free fall of the computer, so as to limit the damage on impact.
Hard drive enclosure:
The housing of a hard disk is very solid. It usually consists of an aluminum alloy casting and is fitted with a stainless steel cover.
It is dust-tight, but not completely airtight: through a small opening fitted with a filter, changes in temperature or air pressure can let air flow in or out to equalize pressure differences. This opening (see figure) must not be closed. Since the air pressure in the housing decreases with increasing altitude above sea level, but a minimum pressure is required for operation, these disks may only be operated up to a certain maximum altitude. This is usually specified in the accompanying data sheet. The air is needed to prevent direct contact between the read/write heads and the disks; see the section on the read/write head unit above.
On newer drives, the filter is replaced by an elastic membrane that adapts the system to changing pressure conditions by bulging in one direction or the other.
If a hard drive is opened in normal, contaminated air, even the smallest dust or smoke particles, fingerprints and the like cause mostly irreparable damage to the disk surfaces and the read/write heads.
Storing and reading data:
Magnetic disks organize their data, unlike RAM (which is organized in bytes or in small groups of 2 to 8 bytes), in blocks or sectors (512, 2048 or 4096 bytes). On the hardware side, only whole sectors can be read or written.
A block is read by specifying its linear sector number. The drive knows where this block is located and reads or writes it on request.
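A minimal sketch of block addressing from the host side, assuming a Linux-style raw block device and a 512-byte logical sector; the device path is a placeholder, and reading it directly requires appropriate privileges. The drive itself translates the linear number into a physical location:

```python
import os

SECTOR_SIZE = 512  # assumption: 512-byte logical sectors

def read_sector(device_path: str, lba: int) -> bytes:
    """Read one sector, addressed only by its linear block address (LBA)."""
    fd = os.open(device_path, os.O_RDONLY)
    try:
        os.lseek(fd, lba * SECTOR_SIZE, os.SEEK_SET)  # byte offset = LBA * sector size
        return os.read(fd, SECTOR_SIZE)
    finally:
        os.close(fd)

# Example (placeholder device, needs privileges):
# data = read_sector("/dev/sda", 0)   # sector 0 holds the partition table on classic disks
```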
When writing, the blocks are:
* First provided with error correction information,
* Then subjected to a modulation: GCR, MFM and RLL were common early on; these have since been replaced by PRML and, more recently, EPRML,
* Then the read/write head carrier is moved to the vicinity of the track to be written,
* The read/write head assigned to the surface that carries the information reads the track signal and performs the fine positioning. This involves, on the one hand, finding the right track and, on the other, following that track exactly in the middle,
* If the read/write head is stable on the track and the correct sector is under the head, the modulated block is written,
* If an incorrect position is suspected, the write operation must be stopped immediately so that no neighboring tracks are destroyed (some of them irreparably).
When reading, these steps are reversed:
* The read/write head carrier is moved to the vicinity of the track to be read,
* The read/write head assigned to the surface that carries the information reads the track signal and performs the fine positioning,
* The track is then read for as long as necessary (possibly a little longer) until the desired sector has been found,
* The sectors found in this process are first demodulated and then subjected to forward error correction,
* Usually far more sectors than the requested one are read. These typically end up in the disk cache (if not already present there), since the probability is high that they will be needed again in the near future (a minimal sketch of this read-ahead idea follows after this list),
* If a sector was hard to read (several read attempts were needed, or the error correction reported a number of correctable errors), it is usually remapped, i.e. stored in another location,
* If the sector was not readable at all, a CRC error is reported.
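A minimal sketch of the read-ahead idea from the list above: when one sector is requested, the following sectors are fetched into a small cache on the assumption that they will soon be needed. This is a toy model of the principle, not actual drive firmware; the cache size and read-ahead depth are arbitrary example values:

```python
class ReadAheadCache:
    """Toy read-ahead cache: on a miss, fetch the requested sector plus the next few."""

    def __init__(self, backend_read, read_ahead: int = 8, capacity: int = 1024):
        self.backend_read = backend_read   # function mapping an LBA to its data
        self.read_ahead = read_ahead       # extra sectors fetched per miss
        self.capacity = capacity           # maximum number of cached sectors
        self.cache = {}                    # LBA -> data, kept in insertion order

    def read(self, lba: int) -> bytes:
        if lba not in self.cache:
            # Cache miss: fetch the sector and the following read-ahead sectors.
            for i in range(lba, lba + 1 + self.read_ahead):
                self.cache[i] = self.backend_read(i)
            # Crude eviction: drop the oldest entries once the cache is full.
            while len(self.cache) > self.capacity:
                self.cache.pop(next(iter(self.cache)))
        return self.cache[lba]

# Usage with a fake backend that just labels each sector:
cache = ReadAheadCache(lambda lba: f"sector {lba}".encode())
cache.read(100)                                  # miss: sectors 100..108 are fetched
print(100 in cache.cache, 105 in cache.cache)    # True True
```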
Physical architecture of the discs:
The magnetization of the coating of the discs is the actual information carrier. It is written by the read/write head onto circular, concentric tracks while the disk rotates. A disk typically contains several thousand such tracks, usually on both sides. The totality of all tracks of the individual platters that lie directly above one another is called a cylinder. Each track is divided into small logical units called blocks. A typical block contains 512 bytes of user data. Each block also carries control information (checksums) that ensures the information was written or read correctly. The totality of all blocks that have the same angular coordinates on the platters is called a sector. The structure of a particular hard drive type, i.e. the number of tracks, surfaces and sectors, is also known as the hard disk geometry. The term sector is, however, often used incorrectly as a synonym for block.
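The relationship between this cylinder/head/sector geometry and a linear block number can be written down directly. The classic conversion used by the historical addressing scheme (with sectors traditionally numbered from 1) is sketched below; the geometry values in the example are purely illustrative:

```python
def chs_to_lba(cylinder: int, head: int, sector: int,
               heads_per_cylinder: int, sectors_per_track: int) -> int:
    """Classic CHS -> LBA conversion; sectors are traditionally numbered from 1."""
    return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

# Example with an illustrative geometry of 16 heads and 63 sectors per track:
print(chs_to_lba(0, 0, 1, 16, 63))   # 0     (the very first block of the disk)
print(chs_to_lba(1, 0, 1, 16, 63))   # 1008  (= 16 * 63 blocks per cylinder)
```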
Since some operating systems reached their limits early on, as the numbering of the blocks exceeded the word size (16 bits) with growing hard disk capacity, clusters were introduced. These are groups of a fixed number of blocks (e.g. 32) that are physically and logically adjacent. The operating system then no longer addresses individual blocks but uses these clusters as the smallest allocation unit on its (higher) level. Only at the hardware-driver level is this relationship resolved again.
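To make the 16-bit limit concrete: with 16-bit block numbers and 512-byte blocks, only 32 MiB would be addressable, whereas grouping 32 blocks into one cluster, as in the example above, raises the limit to 1 GiB. A quick check:

```python
BLOCK_SIZE = 512          # bytes per block
MAX_NUMBERS = 2 ** 16     # how many distinct 16-bit block or cluster numbers exist

# Addressing individual blocks with 16-bit numbers:
print(MAX_NUMBERS * BLOCK_SIZE / 2**20, "MiB")                    # 32.0 MiB

# Addressing clusters of 32 blocks each with the same 16-bit numbers:
CLUSTER_BLOCKS = 32
print(MAX_NUMBERS * CLUSTER_BLOCKS * BLOCK_SIZE / 2**30, "GiB")   # 1.0 GiB
```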
On modern hard disks, the true geometry, i.e. the number of sectors, heads and cylinders managed by the disk controller, is usually not visible from the outside (i.e. to the computer or the hard disk driver). The computer then works with a virtual hard drive that has completely different geometric data. This explains why, for example, a hard disk that really has only four heads is seen by the computer as having 255 heads. One reason for this virtual concept is the wish to overcome limitations of PC-compatible hardware. Furthermore, the hard disk controller can thereby hide defective blocks and map in a block from a reserve area instead. To the computer it always looks as if all blocks are free of defects and usable. It is believed that this reserve area accounts for about 10 to 20 percent of the specified hard disk capacity. Special firmware versions can make this reserve area usable, which then reduces the lifetime of the logical disk (or the data security). Today's conventional hard disks additionally divide the platters into zones, each zone containing several tracks with the same number of blocks.
Speed:
Hard drives are among the slowest parts of PC hardware, so speed is an essential factor. The most important technical parameters are the continuous transfer rate (sustained data rate) and the average access time (data access time). The values can be found in the manufacturers' data sheets.
The continuous transfer rate is the amount of data per second that the disk transfers on average when reading consecutive sectors. The values for writing are mostly similar and are therefore usually not specified.
For both writing and reading, before a particular block can be accessed, the read/write head of the drive must be moved to the desired track, and it must then wait until the rotation of the disk brings this block under the head. These mechanically induced delays are, as of 2009, around 6 to 20 ms, which by the standards of other computer hardware is an eternity. Hence the extremely high latency of disks compared with RAM, which still has to be taken into account in software development and in the design of algorithms.
The access time consists of several components:
* The track-change time (seek time),
* The latency time (latency) and
* The command-latency (controller overhead).
The track-change time is determined by the strength of the actuator that drives the read/write heads (servo). Depending on the distance the heads have to cover, different times result; usually only the average value for changing from a random track to another random track is specified (weighted by the number of blocks on the tracks). Since about 2003, most desktop drives have made it possible to lengthen access times artificially in favor of lower noise.
The latency time is a direct consequence of the rotation speed. On average, half a revolution passes until a particular sector comes under the head. This yields the fixed relationship: average latency [ms] = 30,000 / rotational speed [rpm].
The command latency is the time the disk controller needs to interpret the command and coordinate the necessary actions. This time is negligible nowadays.
These technical parameters are only of limited significance for system performance. In the professional field, therefore, another figure is used, namely the number of input/output operations per second (IOPS). This is largely dominated by the access time. It also follows from the definition that two drives of half the capacity, at the same speed, provide the same amount of data with twice the number of IOPS. This is one reason why server drives are typically not as large as desktop drives.
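Putting the components together: the average rotational latency follows directly from the spindle speed (half a revolution, i.e. 30,000 / rpm in milliseconds), and a rough estimate of random IOPS is the reciprocal of the total access time. The seek times and speeds below are illustrative values, not measurements of any particular drive:

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """On average half a revolution passes: 30,000 / rpm gives milliseconds."""
    return 30_000 / rpm

def estimated_random_iops(avg_seek_ms: float, rpm: float,
                          controller_overhead_ms: float = 0.0) -> float:
    """Rough upper bound on random I/O operations per second."""
    access_ms = avg_seek_ms + avg_rotational_latency_ms(rpm) + controller_overhead_ms
    return 1000 / access_ms

print(round(avg_rotational_latency_ms(7200), 2))   # 4.17 ms at 7,200 rpm
print(round(estimated_random_iops(8.5, 7200)))     # ~79 IOPS  (desktop-class example)
print(round(estimated_random_iops(3.5, 15000)))    # ~182 IOPS (server-class example)
```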
The evolution of disk access time cannot keep pace with other PC components such as CPU, RAM or graphics cards, so it has become a bottleneck. To achieve high performance, a hard drive should therefore, whenever possible, read or write large amounts of data in consecutive blocks, because then the read/write head does not have to be repositioned for each block.
This is achieved, among other things, by performing as many operations as possible in RAM and by matching the placement of the data on the disk to the access pattern. The main means for this is a large cache in the computer's main memory, which all modern operating systems provide. In addition, the hard disk electronics contain a small cache (as of 2010, a few MB), which is mainly used to decouple the interface transfer rate from the sustained transfer rate of the read/write head.
Besides the use of a cache, there are other software strategies to enhance performance. They are especially effective in multitasking systems, where the disk system is confronted with several or many read and write requests at once. It is then usually more efficient to put these requests into a sensible new order. This is handled by a disk scheduler. The simplest principle follows the same strategy as an elevator control: the tracks are first approached in one direction and the requests there are processed, for example in order of monotonically increasing track numbers. Only when all of these have been processed does the movement reverse and work through monotonically decreasing track numbers, and so on.
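A minimal sketch of the elevator idea described above: pending track requests are served in order of increasing track number, then the direction reverses. This is a toy model of the scheduling principle, not the implementation used by any particular operating system:

```python
def elevator_order(pending_tracks, head_position, moving_up=True):
    """Return the order in which pending track requests are served (elevator/SCAN)."""
    up = sorted(t for t in pending_tracks if t >= head_position)
    down = sorted((t for t in pending_tracks if t < head_position), reverse=True)
    return up + down if moving_up else down + up

# Head currently at track 50, requests scattered over the disk:
print(elevator_order([10, 95, 52, 180, 30, 120], head_position=50))
# [52, 95, 120, 180, 30, 10] -> first monotonically upward, then back down
```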
Until about 1990, hard drives usually had so little cache (0.5 to 8 KB) that they could not buffer a complete track (then 8.5 KB or 13 KB). Therefore, data access had to be slowed down or optimized by interleaving. This was no longer necessary for disks with high-quality SCSI or ESDI controllers or with the then-emerging IDE drives.
Since about 2008, another technology for the permanent storage of data has been finding its way into the PC: the SSD (solid-state drive). It has significantly improved access times to data.
Partitions:
From the perspective of the operating system, hard drives can be divided into several areas by partitioning. These are not real drives, but are only represented as such by the operating system. One can think of them as virtual hard disks, which the disk driver presents to the operating system as separate devices. The hard drive itself does not "know" about these partitions; they are a matter for the operating system above it.
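To illustrate that a partition is just a description stored on the disk rather than something the drive itself knows about, the sketch below decodes the four primary entries of a classic MBR partition table (16 bytes each, starting at byte offset 446 of sector 0). This is a simplified reading of the format that ignores extended partitions and GPT, and the image path in the usage example is a placeholder:

```python
import struct

def parse_mbr_partitions(mbr: bytes):
    """Decode the four primary partition entries of a classic 512-byte MBR sector."""
    partitions = []
    for i in range(4):
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        # Layout: status (1 byte), CHS start (3), type (1), CHS end (3),
        #         start LBA (4, little-endian), size in sectors (4, little-endian)
        status, ptype, start_lba, size_sectors = struct.unpack('<B3xB3xII', entry)
        if ptype != 0:                      # type 0x00 marks an unused entry
            partitions.append({
                'bootable': status == 0x80,
                'type': hex(ptype),
                'start_lba': start_lba,
                'size_sectors': size_sectors,
            })
    return partitions

# Usage (placeholder image path; a raw device would need privileges):
# with open('disk.img', 'rb') as f:
#     print(parse_mbr_partitions(f.read(512)))
```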
Each partition is usually formatted by the operating system with a file system. Depending on the file system used, several blocks may be combined into clusters, which are then the smallest logical unit of data written to the disk. The file system ensures that data can be stored on the disk as files. A directory structure in the file system ensures that files can be found and stored in a hierarchical organization. The file system driver manages the occupied, available and defective clusters. An example of a file system is the FAT file system (the only one supported by MS-DOS and Windows 9x).
Noise reduction:
To reduce the noise the drives make when accessing data, many ATA and SATA hard drives intended for desktop use support Automatic Acoustic Management (AAM). If the drive is operated in a quiet mode, the read/write heads are accelerated less strongly, so that the access noise is lower. The noise of the platter stack and the data transfer rate are not affected by this, but the access time increases.
Interfaces, bus system and jumpering:
Originally (until 1990/91), what is now understood as the interface of consumer hard disk drives was not located on the hard disk itself. A controller in the form of an ISA expansion card was necessary for this. The controller addressed the drive over an interface known under the names ST506 (with the modulation standards MFM, RLL and ARLL) and ESDI. The capacity of the drive depended on the controller, and the same was true for data reliability: a 20 MB MFM disk held 30 MB with an RLL controller, but was then possibly not particularly reliable.
Only SCSI disks, ESDI drives and the emerging IDE drives put an end to this tradition from the early days of magnetic disk storage technology.
As the interface between hard drive and computer, the serial SATA (S-ATA) interface is now mainly used in the desktop segment. Until recently, the parallel ATA (IDE, EIDE) interface was the standard here. For servers and workstations, SCSI (parallel), Fibre Channel and SAS (both serial) are usual, and in the lower performance range also SATA.
For a long time, most motherboards provided two ATA interfaces; now they come, partly in addition and partly instead, with up to ten SATA ports.
A similar change can be seen in servers and storage subsystems. Alongside the traditionally used SCSI hard disks, serial types such as Fibre Channel or SAS drives are being used more and more.
A basic problem of parallel transmission is that, with increasing speed, it becomes more and more difficult to control the differing propagation times of the individual bits through the cable and the signal reflections. The parallel interfaces are therefore reaching their limits; this limitation is removed by serial transmission techniques, which allow higher rates.
ATA (IDE):
[Figure: hard disk configuration jumpers]
On an ATA hard disk, jumpers determine whether the drive responds to address 0 or 1 of the ATA interface (device 0 or 1, often called master or slave). Some models also allow the capacity reported to the operating system or BIOS to be limited, so that in case of incompatibilities the hard drive can still be put into operation (at the cost of the unreported disk space).
By setting the ATA bus address, two hard drives can be connected to one ATA interface on the motherboard.
