Single Client Benchmarks

Windows Clients

The RAID rebuild functionality was where our positive experience with the N2310 ended. Our evaluation of NAS units usually starts by benchmarking a CIFS share on the unit with Intel NASPT and our custom robocopy tests from a single client. We started out with firmware version 691, which caused the NASPT run to break midway with a message that the NAS unit had stopped responding to requests. This would happen after two or three passes of the five in a batch run. Moving to firmware version 743 eventually solved the issue. The results of our NASPT evaluation of the CIFS share are presented in the graphs below.

[Graphs: single-client CIFS results for HD Video Playback, 2x HD Playback, 4x HD Playback, HD Video Record, HD Playback and Record, Content Creation, Office Productivity, File Copy to NAS, File Copy from NAS, Dir Copy to NAS, Dir Copy from NAS, Photo Album, robocopy (Write to NAS), and robocopy (Read from NAS)]

Linux Clients

From the perspective of Linux clients, we evaluated both CIFS and NFS support using a CentOS 6.2 VM. To standardize testing across multiple NAS units, we mount the CIFS and NFS shares at startup with the following /etc/fstab entries:

//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0

<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
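Since the same entries get reused for every NAS under test, it can be handy to generate them per unit. A minimal Python sketch, where the helper name, IP address, share names, and mount root are all hypothetical placeholders rather than anything from the review:

```python
def fstab_entries(nas_ip, smb_share, nfs_export, mount_root):
    """Return the CIFS and NFS /etc/fstab lines used for benchmarking.

    The mount options mirror the entries quoted in the text; the paths
    and this helper itself are illustrative only.
    """
    # Guest CIFS mount with an empty password, as in the text above
    cifs = (f"//{nas_ip}/{smb_share} {mount_root}/cifs cifs "
            f"rw,username=guest,password= 0 0")
    # NFSv3 over TCP with 32 KB read/write transfer sizes
    nfs_opts = ",".join([
        "rw", "relatime", "vers=3",
        "rsize=32768", "wsize=32768",
        "namlen=255", "hard", "proto=tcp",
        "timeo=600", "retrans=2", "sec=sys",
        f"mountaddr={nas_ip}", "mountvers=3", "mountproto=udp",
        "local_lock=none", f"addr={nas_ip}",
    ])
    nfs = f"{nas_ip}:/{nfs_export} {mount_root}/nfs nfs {nfs_opts} 0 0"
    return cifs, nfs

cifs_line, nfs_line = fstab_entries("192.168.1.50", "public", "public",
                                    "/mnt/n2310")
print(cifs_line)
print(nfs_line)
```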

The following iozone commands were used to benchmark the shares:

iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv

iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv
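The two invocations share the same flags and differ only in the mount point and output names, so a wrapper script can build them from a template. A hedged sketch; the function, paths, and NAS name below are our own illustrations, not part of the review's methodology:

```python
def iozone_cmd(mount_point, nas_name, proto, max_file_kb=2097152):
    """Build the iozone argv used in the text.

    Flags: -a full automatic mode, -c include close() in timing,
    -z test all record sizes even for small files, -R Excel-style report,
    -g cap the maximum file size in KB (2097152 KB = 2 GB),
    -U unmount/remount the share between tests,
    -f location of the test file, -b binary Excel output file.
    """
    return ["iozone", "-aczR",
            "-g", str(max_file_kb),
            "-U", mount_point,
            "-f", f"{mount_point}/testfile",
            "-b", f"{nas_name}_{proto}_EXCEL_BIN.xls"]

# Example: the CIFS run (stdout would still be redirected to the CSV file)
cmd = iozone_cmd("/mnt/n2310/cifs", "N2310", "CIFS")
print(" ".join(cmd))
```

In practice this would be handed to subprocess.run() with stdout redirected to the CSV file, as the shell redirection does above.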

IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side, as is evident in some of the graphs in the gallery below. The Linux CIFS test took multiple tries to complete: we often found that the rapid mounting and unmounting triggered by iozone's -U flag would hang the smbd process on the N2310 (as shown in the picture below). Eventually, one full pass of the iozone test completed on the CIFS share.

Thecus said they were able to reproduce the problem (along with the NFS issue cited below), but also that they could see the issue with NAS units from competitors (though, personally, I have never encountered the problem in my setup while evaluating other NAS units).

On the NFS side of things, the N2310 supports both NFS v3 and v4. Unfortunately, while benchmarking with the -U flag, we were never able to get the test to complete. File sizes up to 1 GB completed a couple of times, but the 2 GB tests would invariably fail with a read data mismatch. In any case, the limited results of our testing are graphed below.

Readers interested in the IOZone CSV output (including the truncated NFS version) can find them here (NFS) and here (CIFS).

A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects. A glance at the actual CSV outputs linked above makes the affected entries obvious.

Thecus N2310 - Linux Client Performance (MBps)

IOZone Test        CIFS    NFS
Init Write           14     32
Re-Write             13     30
Read                 31     89
Re-Read              31     90
Random Read          16     37
Random Write         11     22
Backward Read        17     31
Record Re-Write     166*   408*
Stride Read          27     70
File Write           13     31
File Re-Write        13     30
File Read            22     67
File Re-Read         21     68

* Performance number skewed by caching effects
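The summary rows above are plain means over every (file size, record size) cell of the corresponding IOZone test. A sketch of that computation with a naive cache-skew check; the 125 MBps gigabit-link threshold, the helper, and the toy data are our assumptions, not the review's actual post-processing:

```python
def summarize(results, link_limit_mbps=125):
    """Average each test's bandwidth across all file/record size cells.

    Means above what a single gigabit link can carry (~125 MBps) must
    involve client-side caching, so they are flagged, mirroring the
    asterisks in the table above.
    """
    summary = {}
    for test, cells in results.items():
        mean = sum(cells) / len(cells)
        summary[test] = (round(mean), mean > link_limit_mbps)
    return summary

# Toy NFS data in MBps, shaped like two rows of the table above
nfs = {"Read": [88, 90, 89], "Record Re-Write": [380, 420, 424]}
for test, (mbps, skewed) in summarize(nfs).items():
    print(f"{test}: {mbps}{'*' if skewed else ''}")
```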

 

Comments (39)

  • PEJUman - Monday, July 7, 2014 - link

Thanks! 'Convenience' is exactly what I thought; just wanted to make sure I am not crazy/stupid :D

I used to have an FTP server with public access on the router, but have since moved to a Dropbox-SkyDrive/OneDrive-Gdrive combo. Small files and slow upload speeds drove me to these guys:
- Dropbox with TrueCrypt containers for sensitive files (Dropbox supports segmented uploads, i.e. only the changed portion of a large file is uploaded).
- Grandfathered SkyDrive for huge files.
- Gdrive for sharing with people.

For now these guys work very well for my cloud access needs... I use symlinks to change the Windows default mapping to the above folders, and it's fire-and-forget for the whole family :D.
  • Phasenoise - Monday, July 7, 2014 - link

    The answer is simple: opportunity cost. It's not a question of cash alone.

    Why don't I just mow my lawn? Why don't I just clean the house myself?

    As an "Elder Geek" - I can make a lot of money in that time I'd spend setting up a file server. It's a solved problem with a commodity solution.
  • PEJUman - Monday, July 7, 2014 - link

I haven't personally looked at the recovery reliability on a failed drive on these things recently, but a few years back it was quite a nightmare.

A botched rebuild/recovery on one of these things could really wipe out any opportunity cost you might have saved initially. Go ahead, ask me how I know... :P

I make my money by geeking out on things that burn dead dinosaurs; when I geek out on electrons and silicon, it's meant as a hobby :D
  • Beany2013 - Tuesday, July 8, 2014 - link

    Didn't see the further posts :-$

I'm pretty sure the Syno RAID1 and RAID10-esque solutions are literally just EXT3/4 on mdadm with some performance tweaks in the OS, so file recovery in the event of a crash = a bootable Ubuntu USB drive and a machine with a few spare SATA slots:
    http://www.synology.com/en-uk/support/faq/579

    Others - not so sure. I'm not a fan of 'flex RAID' type affairs either.
  • wintermute000 - Tuesday, July 8, 2014 - link

    I've had a series of QNAPs and they've all recovered a RAID-5 failure at least once, zero issues, just took ages to rebuild compared to a 'real' CPU in a 'real' server.

    Mind you QNAP / Synology is the gold standard for these home/SMB appliances. I wouldn't trust a Thecus myself.
  • jabber - Monday, July 7, 2014 - link

Maybe they just don't have the time? It's just storage. Storage is a commodity item. How amazing or convoluted do you need to get to have a place to dump some data?
  • kmmatney - Monday, July 7, 2014 - link

I also have a home-built WHS server with 5 storage drives. It generally works fine, but can also be annoying at times, especially with Windows rebooting itself after updates, and lots of general Windows "issues" that have driven me nuts over the last few years. I would love to have something simple like this, but the expandability isn't there. You can't beat having 6-8 SATA ports on a motherboard, and then the ability to easily expand. I would really like to be able to buy something like an 8-bay NAS and expand as I need it, but the price of that is ridiculous, so for now I'm just going the computer route (so I agree with you, but wish I didn't have to...).
  • jabber - Monday, July 7, 2014 - link

You can have 8TB in a dual-bay NAS, more soon. A couple of those isn't going to break the bank.

That's 16TB....
  • PEJUman - Monday, July 7, 2014 - link

If you have 8TB with RAID 0, it will break the bank when one of the 8TB drives fails.
JBOD is better for single-gigabit home use.
  • tuxRoller - Monday, July 7, 2014 - link

What Windows-only features are you making use of that prevent you from changing the OS?
Assuming that it's mostly a file server, the only thing that comes to mind is Windows Media Center's ability to record arbitrary TV shows.
