
How to measure hard drive speed in Linux


If you have several hard drives and flash drives, at some point you need to measure their performance in order to decide which storage device to use for which purpose. The read/write speed of hard drives is usually checked with hdparm.

hdparm is a console utility (previously included in the hwtools package) for viewing and adjusting the parameters of hard drives with an ATA interface (the parallel interface used to connect drives, hard disks, and optical drives to a computer).

Hard disk parameters are set by default with an emphasis on reliability, even on low-quality equipment, and on most modern motherboards and hard drives the performance of the IDE subsystem can be noticeably increased without compromising reliability.

Currently there are no reliable methods for determining the optimal parameters for a device (apart from careful trial and observation), and there is no centralized database collecting the observations of experienced users. The safest approach, then, is to compare devices using their default parameters and choose the one that performs best. The easiest way to do this is with hdparm, especially since it is included in almost all modern Linux distributions.
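
For example, a quick way to inspect a drive's current settings before changing anything (root is required; /dev/sda is an assumed device name):

sudo hdparm /dev/sda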

Although the main purpose of hdparm is tuning and optimization, it can also be used as a simple benchmarking tool; it is enough to execute (hdparm requires root privileges to work):

sudo hdparm -t "device name"

For instance:

sudo hdparm -t /dev/sda

You can find out the disk name by running:

sudo fdisk -l
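
Alternatively, lsblk prints a compact tree of all block devices and their partitions:

lsblk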

It is best to run the test when there is no other noticeable disk activity. The -t option reports the speed of sequential reads from the disk, without delays caused by the file system.

This test shows the highest data transfer rate of the drive under test: reading takes place at the very beginning of the disk, its fastest section, so the resulting numbers overstate the real average speed of the disk. The most realistic result is obtained by reading the disk at random points, in random order. Such a test can be performed with the console utility seeker.

seeker is a console utility that checks the read speed of hard drives with random access: the disk head jumps quickly from one place to another, reading small pieces of data. Because mechanical seeks are involved, access is much slower than in the sequential test.

The random-access method used by seeker is much closer to the real workload of a hard drive, so its results look more plausible. When using seeker, it is important to test the entire disk (/dev/sda), not an individual partition (/dev/sda1, /dev/sda2, /dev/sda3, etc.):

sudo seeker "disk name"

The utility is easy to use and runs without additional options; a test takes about thirty seconds, and for fuller access to the disk it is better to run the utility as root. Besides hard drives, seeker lets you run comparative tests of your flash drives (e.g. to pick the fastest device for a LiveUSB).
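
For example, to compare two flash drives that appear as /dev/sdb and /dev/sdc (assumed names; check yours with fdisk -l):

sudo seeker /dev/sdb
sudo seeker /dev/sdc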

Original: Test read/write speed of usb and ssd drives with dd command on Linux
Author: Silver Moon
Publication date: Jul 12, 2014
Translation: N. Romodanov
Date of translation: October 2014

Device speed

The speed of a device is measured by how much data it can read or write per unit of time. The dd command is a simple command-line tool that can read and write arbitrary blocks of data on a disk and measure the speed at which the transfer occurred.

In this article, we will use the dd command to check the read and write speed of usb and ssd devices.

The data transfer speed depends not only on the disk, but also on the interface through which it is connected. For example, a USB 2.0 port has a practical maximum of about 35 MB/s, so even if you plug a high-speed USB 3 flash drive into a USB 2 port, the speed will be capped at the lower value.

The same applies to SSDs. An SSD is connected through a SATA port, which comes in different versions. SATA 2.0 has a theoretical limit of 3 Gbit/s, which is about 375 MB/s, while SATA 3.0 supports twice that.

Test method

Mount the drive and navigate to it from a terminal window. Then use the dd command to first write a file consisting of fixed-size blocks, and then read the same file back using the same block size.

The general syntax of the dd command is as follows

dd if=/path/to/input_file of=/path/to/output_file bs=block_size count=number_of_blocks

When writing to the disk, we simply read from /dev/zero, a source of an endless stream of zero bytes. When reading from the disk, we read the previously written file and send it to /dev/null, which discards everything written to it. Throughout the process, dd tracks and reports the rate at which the transfer occurs.

SSD device

The SSD we are using is a Samsung Evo 120GB drive. It is a budget, entry-level SSD and also my first SSD drive. It is nonetheless one of the best-performing SSDs available on the market.

In this test, the ssd drive is connected to a sata 2.0 port.

Write speed

Let's write to the SSD first

$ dd if=/dev/zero of=./largefile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.82364 s, 223 MB/s

The block size is actually quite large. You can try using a smaller size like 64k or even 4k.
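
For instance, a sketch of the same 1 GB write using 4k blocks (count is scaled so the total stays 1 GB: 262144 x 4 KB = 1 GB):

$ dd if=/dev/zero of=./largefile bs=4k count=262144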

Read speed

Now read the same file back. But first clear the memory cache, to make sure the file is actually read from the disk.

To clear the memory cache, run the following command

$ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"

Now read the file

$ dd if=./largefile of=/dev/null bs=4k
165118+0 records in
165118+0 records out
676323328 bytes (676 MB) copied, 3.0114 s, 225 MB/s

USB device

In this test we will measure the read speed of ordinary USB flash drives. The devices are connected to standard USB 2 ports. The first device is a Sony 4GB stick and the second is a Strontium 16GB.

First connect the device and mount it so that it is readable. Then, from the command line, navigate to the mounted directory.
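
A minimal sketch of these steps, assuming the stick shows up as /dev/sdb1 and using /mnt/usb as the mount point (both names are assumptions; check yours with fdisk -l):

# create a mount point and mount the flash drive
sudo mkdir -p /mnt/usb
sudo mount /dev/sdb1 /mnt/usb
# run the tests from inside the mounted directory
cd /mnt/usb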

Sony 4GB device - writing

In this test, the dd command writes 10,000 blocks of 8 KB each into a single file on the disk.

# dd if=/dev/zero of=./largefile bs=8k count=10000
10000+0 records in
10000+0 records out
81920000 bytes (82 MB) copied, 11.0626 s, 7.4 MB/s

The write speed is about 7.5 MB/s. This is a low figure.

Sony 4GB device - reading

The same file is read to test the read speed. To clear the memory cache, run the following command

$ sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"

Now read the file with dd command

# dd if=./largefile of=/dev/null bs=8k
8000+0 records in
8000+0 records out
65536000 bytes (66 MB) copied, 2.65218 s, 24.7 MB/s

The read speed is approximately 25 MB/s, which is more or less standard for cheap usb sticks.

USB 2.0 has a theoretical maximum signaling rate of 480 Mbit/s, i.e. 60 MB/s. Due to various limitations, the real maximum throughput is about 280 Mbit/s, or 35 MB/s. On top of that, the actual speed depends on the quality of the flash drive and other factors.

The USB device described above was connected to a USB 2.0 port and achieved a read speed of 24.7 MB/s, which is not bad at all. The write speed, however, lags far behind.

Now let's do the same test with a Strontium 16gb stick. Strontium is another brand that makes very cheap usb sticks, but these sticks are reliable.

Write speed for Strontium 16gb device

# dd if=/dev/zero of=./largefile bs=64k count=1000
1000+0 records in
1000+0 records out
65536000 bytes (66 MB) copied, 8.3834 s, 7.8 MB/s

Read speed for Strontium 16gb device

# sudo sh -c "sync && echo 3 > /proc/sys/vm/drop_caches"
# dd if=./largefile of=/dev/null bs=8k
8000+0 records in
8000+0 records out
65536000 bytes (66 MB) copied, 2.90366 s, 22.6 MB/s

The read speed is slightly lower than that of the Sony device.

To determine the disk write speed, you need to run the following command in the console:

sync; dd if=/dev/zero of=tempfile bs=1M count=1024; sync

The command writes a 1 MB block to a temporary file 1024 times, and it reports output like this:

1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 15.4992 s, 69.3 MB/s

Determining the disk read speed takes one extra step. The temporary file generated by the previous command is cached in the buffer cache, which by itself inflates its read speed well above the real read speed of the hard disk. To get the real figure, this cache must be cleared first.

First, to measure the read speed from the buffer cache, run the following command in the console:

dd if=tempfile of=/dev/null bs=1M count=1024

Output of the previous command:

1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 15.446 s, 69.5 MB/s

To measure the actual read speed from the disk, clear the cache:

sudo /sbin/sysctl -w vm.drop_caches=3

Command output:

vm.drop_caches = 3

We perform a reading speed test after clearing the cache:

dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 16.5786 s, 64.8 MB/s

Performing a read/write speed test on an external drive

To test the speed of any external HDD, USB flash drive, or other removable media, or the file system of a remote machine (VPS/VDS), go to its mount point and run the commands above.

Or, instead of tempfile, you can of course give the full path to a file under the mount point, like this:

sync; dd if=/dev/zero of=/media/user/USBFlash/tempfile bs=1M count=1024; sync

Note that the commands above create the temporary file tempfile. Don't forget to delete it after the tests are over.
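
For example, for the mount point used above (the path is illustrative):

rm /media/user/USBFlash/tempfile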

HDD speed test using hdparm utility

hdparm is a Linux utility that allows you to quickly find out the read speed from your hdd.

To start measuring reading speed from your hard drive, you need to run the following command in the console:

sudo hdparm -Tt /dev/sda

Command output in console:

/dev/sda:
 Timing cached reads: 6630 MB in 2.00 seconds = 3315.66 MB/sec
 Timing buffered disk reads: 236 MB in 3.02 seconds = 78.17 MB/sec

That's all. We have measured the performance of our hard drive and can give a rough estimate of its capabilities.

Almost every hosting company is now rapidly modernizing its disk subsystems. Solid-state drives have been a major breakthrough in the performance of computer hardware, including servers. For many years the disk was the bottleneck, the so-called "weak link" in the performance of any information system: all the other components (the processor, RAM, system buses, even the network) have long been much faster and more productive than the drives. An SSD gives roughly a 3-5x performance boost to any machine, which means applications run several times faster, sometimes tens of times faster.

So, the hoster offers you two tariff lines: SSD and non-SSD. You, of course, take the SSD. But how do you make sure the hoster really provided an SSD? The site will work the same either way, on any kind of disk. In theory, the hoster can tell you its servers run on fast solid-state drives while actually selling capacity on conventional HDDs, and you would probably never know.

After all, SSDs are much more expensive than regular drives, and hosters operate at serious scale: they need to store terabytes of data. Imagine what such systems cost, given that 1 GB of solid-state storage is about 10 times more expensive than 1 GB of a regular disk.

What is SSD-boost or flashcache?

In general, it is a hybrid system using an SSD + HDD pair. All data is stored on large traditional disks, and special software assembles the drives into a clever array in which the SSD acts as a cache for any data that is written or read. Such an array pairs a small SSD, say 120 GB, with a large HDD, say 2 TB, and delivers roughly the read/write speed of an SSD with the capacity of an HDD. That's it: the hoster can then easily tell you that everything runs on SSDs. Honest hosters call this SSD-boost, and it does not negatively affect the operation of the sites.
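
For the curious, here is a minimal sketch of how such a hybrid can be assembled on Linux using bcache, one of several tools for this purpose (the device names are assumptions, and these commands destroy any existing data on the drives):

# requires the bcache-tools package; /dev/sdb is the large HDD,
# /dev/sdc the small SSD -- THIS WIPES BOTH DRIVES
sudo make-bcache -B /dev/sdb -C /dev/sdc
# the combined device appears as /dev/bcache0; format and mount it as usual
sudo mkfs.ext4 /dev/bcache0
sudo mount /dev/bcache0 /mnt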

I've tested disk speeds with dozens of different hosts. You may be surprised, but only about 1 host in 5 provides an "honest" SSD.

I capture these things with screenshots.

Tests of fake SSDs from some hosts

Hoster #1

Here we see only 30 MB/s on writes. That is normal speed for a regular HDD, but the hoster declares it an SSD.

Hoster #2

A similar picture, though the read speed is a little better. Perhaps this is flashcache under heavy load, but most likely it is just a RAID array of conventional HDDs: assembled the right way, such an array can improve read performance by a factor of 1.5-2.

Hoster #3

My favorite hoster. It shows downright wild results: not only is it an HDD, it is also overloaded with disk requests.

Hoster #4

This one is actually a fun story. I was doing an audit of a client's server after complaints about sluggishness, so let me check the disk.

Here is the picture I got. I write to the client: the hoster is blatantly deceiving you. The client runs to support, and it turns out to be true: when he switched from one tariff to another, they "forgot" to enable the SSD for him, you see? He switched, we tested again, and a real SSD appeared.

Real SSD tests

Now, to make the difference clear, I will show you screenshots of tests of a real SSD.

This is how it should look: a write speed above 100 MB/s. That is the minimum for an SSD. This is a test from my work laptop, on which I am writing this post; it has the cheapest 120 GB SSD. As you can see, its speed is 4-5 times higher than that of a traditional disk.

And here is a test of a hoster that provides a real SSD.

This is definitely a real SSD. This is how it should be. Perhaps a boost is configured, but it's still an SSD and you can live with this hoster.

How to do a disk speed test with a host?

I use dd for this. It is present in any Linux, but it must be handled with care: the utility writes raw data to whatever device or file you specify, so a careless command risks destroying the entire server and all the data on it.

So, for the write test, take a stream of zeros from the special device /dev/zero and send it to a file on the drive under test. Any file will do; for example, one in the temporary folder, /tmp/test.img:

dd if=/dev/zero of=/tmp/test.img bs=1M count=1024 oflag=dsync

This command will create a 1 GB file and display the write speed.

You can check the read speed right away; this time the if option should point to the file we just created, and of should point somewhere into the void. Linux has just such a device, /dev/null, so we direct the output there:

dd if=/tmp/test.img of=/dev/null bs=1M count=1024

But before that, you need to drop the disk cache; otherwise the file will be read back in a second and you will see a read speed in GB/s. This is done with the following command:

sudo sysctl -w vm.drop_caches=3

After that, run the read test with the second command.

Well, at the end, you need to delete the test file so that it does not take up space:

rm -f /tmp/test.img

This will only work on a dedicated server or VPS, and not on every VPS, since they differ in virtualization technology. Many hosters provide not full virtualization (KVM, Xen) but containers (OpenVZ), where there is no access to kernel parameters, which means the cache cannot be dropped. You then have to read and write different files, or wait a few hours before the read test until the disk cache is overwritten with other data. Testing the speed on shared hosting is also quite difficult, since you have no root access there; but the dd utility is usually available to any system user, so with SSH access you can still run the check.
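
Where dropping the cache is impossible, one partial workaround is to have dd bypass the page cache using direct I/O, which requires no root access (assuming the filesystem supports O_DIRECT, which most local filesystems do):

dd if=tempfile of=/dev/null bs=1M count=1024 iflag=direct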

For more rigorous measurements there is the fio utility. It requires reading the manual (man fio), but it will give you accurate results. Note that for any accuracy, you need to specify exactly what you want to measure. Some examples:

Sequential READ speed with big blocks

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

Sequential WRITE speed with big blocks (this should be near the number you see in the specifications for your drive):

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=write --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting

Random 4K read QD1 (this is the number that really matters for real-world performance unless you know better for sure):

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting

Mixed random 4K read and write QD1 with sync (this is the worst-case number you should ever expect from your drive, usually less than 1% of the numbers listed in the spec sheet):

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randrw --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting

Increase the --size argument to increase the file size. Using bigger files may reduce the numbers you get, depending on drive technology and firmware. Small files will give "too good" results for rotational media because the read head does not need to move much. If your device is nearly empty, using a file big enough to almost fill the drive will get you the worst-case behavior for each test. In the case of an SSD, the file size does not matter that much.

However, note that for some storage media the size of the file is not as important as the total bytes written within a short time period. For example, some SSDs perform significantly faster with pre-erased blocks, or have a small SLC flash area used as a write cache, and performance changes once the SLC cache is full. As another example, Seagate SMR HDDs have about a 20 GB PMR cache area with pretty high performance, but once it gets full, writing directly to the SMR area may cut performance to 10% of the original; the only way to see this degradation is to first write 20+ GB as fast as possible. Of course, this all depends on your workload: if your write access is bursty, with long delays that allow the device to clean its internal cache, shorter test sequences will reflect your real-world performance better. If you need to do lots of sustained IO, increase both the --io_size and --runtime parameters. Note that some media (e.g. most flash devices) will get extra wear from such testing. In my opinion, if a device is poor enough not to handle this kind of testing, it should not be used to hold any valuable data in any case.

In addition, some high-quality SSD devices have even more intelligent wear-leveling algorithms, where the internal SLC cache has enough smarts to replace data in place as it is rewritten during the test, as long as the test hits the same address space (that is, the test file is smaller than the total SLC cache). For such devices, the file size starts to matter again. If you need to match your actual workload, it's best to test with the file sizes you'll actually see in real life; otherwise your numbers may look too good.

Note that fio will create the required temporary file on the first run. It will be filled with random data to avoid getting "too good" numbers from devices that cheat by compressing the data before writing it to permanent storage. The temporary file is called fio-tempfile.dat in the examples above and is stored in the current working directory, so you should first change to a directory mounted on the device you want to test.

If you have a good SSD and want to see even higher numbers, increase --numjobs above. That defines the concurrency of the reads and writes. The examples above all have numjobs set to 1, so the test is about a single-threaded process reading and writing (possibly with a queue set via iodepth). High-end SSDs (e.g. Intel Optane) should get high numbers even without increasing numjobs much (e.g. 4 should be enough to reach the highest spec numbers), but some "enterprise" SSDs require going to 32-128 to reach the spec numbers, because the internal latency of those devices is higher but the overall throughput is insane.
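
For instance, here is the random 4K read test from above with four parallel jobs; only --numjobs changes, and the command remains a sketch to adapt to your own device:

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=4 --runtime=60 --group_reporting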
