The Top 15 Reasons Your Synology is SLOW (and how to fix them)

Reddit Comments

Can anyone put these 15 reasons in a typed list? This is a 30 minute video….

👍 68 👤 u/saladroni 📅 Jun 04 2021 🗫 replies

Wait. Number 12 is a problem?

👍 6 👤 u/minderasr 📅 Jun 04 2021 🗫 replies
  1. Only provide 1000M LAN on the latest Home use NAS..
👍 4 👤 u/Ok-Escape9025 📅 Jun 04 2021 🗫 replies

MTU mismatch is not in this list? It's almost always the cause for my slow file transfers.

👍 4 👤 u/cdegallo 📅 Jun 04 2021 🗫 replies

Nice input, but to be honest it's way too long a video for the message, though I appreciate your engagement.

👍 3 👤 u/Rez-1911 📅 Jun 04 2021 🗫 replies

That was actually informative for someone like me who is not a networking or Synology noob.

You've come a long way since you first started making these videos, and it's endearing to see you end the video by telling people how to hire you :)

Also, 14.4k subscribers compared to 39.9k subscribers at NASCompares, which I think has been around for much longer. Good job!

👍 3 👤 u/not_anonymouse 📅 Jun 05 2021 🗫 replies

For Linux clients, there are a handful of default options that aren't ideal for max throughput, and you might consider at least covering the major ones. For example, using "cache=loose" when mounting CIFS shares gives a massive performance increase, albeit with some caveats:

cache=loose allows the client to use looser protocol semantics which can sometimes provide better performance at the expense of cache coherency. File access always involves the pagecache. When an oplock or lease is not held, then the client will attempt to flush the cache soon after a write to a file. Note that that flush does not necessarily occur before a write system call returns.
In the case of a read without holding an oplock, the client will attempt to periodically check the attributes of the file in order to ascertain whether it has changed and the cache might no longer be valid. This mechanism is much like the one that NFSv2/3 use for cache coherency, but it is particularly problematic with CIFS. Windows is quite "lazy" with respect to updating the "LastWriteTime" field that the client uses to verify this. The effect is that cache=loose can cause data corruption when multiple readers and writers are working on the same files.
Because of this, when multiple clients are accessing the same set of files, then cache=strict is recommended. That helps eliminate problems with cache coherency by following the CIFS/SMB2 protocols more strictly.
Note too that no matter what caching model is used, the client will always use the pagecache to handle mmap'ed files. Writes to mmap'ed files are only guaranteed to be flushed to the server when msync() is called, or on close().
The default in kernels prior to 3.7 was "loose". As of 3.7, the default is "strict".

I would also recommend using larger send/receive buffers, etc. I can send you a list of recommended changes for /etc/sysctl.conf and smb.conf if you're interested.
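
For reference, here is what those cache options look like on the mount command line. This is a minimal sketch assuming a Linux client, a share at //nas/share and a mount point of /mnt/share (both hypothetical names); rsize=/wsize= are the mount-side knobs for the larger buffers mentioned above.

    # Multiple clients touching the same files: stick with the stricter, coherent behavior (the modern default).
    sudo mount -t cifs //nas/share /mnt/share -o username=myuser,vers=3.0,cache=strict

    # Single client that just wants throughput and can live with the coherency caveats quoted above.
    # Larger rsize=/wsize= values can also be appended here to bump the per-request buffer sizes.
    sudo mount -t cifs //nas/share /mnt/share -o username=myuser,vers=3.0,cache=loose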

👍 3 👤 u/DopePedaller 📅 Jun 05 2021 🗫 replies

Thank you! Usually tech videos are too dry and boring or so useless that I never finish one unless I really want to get something from the video. Yours was lively and entertaining enough with a good deal of actual content that I watched the whole thing! Learned a few things that are nice to know, but I don't need (at least at the moment).

👍 2 👤 u/Githyerazi 📅 Jun 04 2021 🗫 replies

16 - You scheduled a file system scrub and forgot about it.

Happens to me about once a year.

👍 2 👤 u/yama1291 📅 Jun 04 2021 🗫 replies
Captions
All right, how's it going, y'all? Today we're going over the 15 most common reasons, in my experience, why a Synology is slow when it comes to file transfers. This is when you're dumping files to your NAS and they're just not writing that quickly, or when you're trying to pull files from your NAS to your computer and it's just kind of slow. This list comes from my own experience: I do a lot of Synology consulting, I obviously do a ton with Synology on this channel, and I spend plenty of time looking through forums and the like.

I'm going to break the list into three sections based on what's causing the slowdown, and I'll also go over some ways to speed things up, or at least diagnose the problem so you understand it better. The first section is the network, which is by far the most common cause: when a Synology is slow, it is usually the network for some reason. The second most common is the disks, the actual disks in the Synology. The third section is the Synology itself, basically the hardware in the unit being slow, or the operating system, and I've got a few of those as well.

Before we get into the list, I want to go over how to get a clean baseline of how your Synology is performing, so you can figure out how fast it actually is and see whether the problem is on your side or the Synology's. What I recommend is this: on your home network, mount the Synology as an SMB share. You can do this on both Windows and macOS, and I've got a video for each that I'll leave in the description. Then, if you're on Windows, download a program called CrystalDiskMark, and if you're on a Mac, download Blackmagic Disk Speed Test from the App Store. These are the two most common ways to test the throughput of a disk, or in this case an SMB share. This only gives us the maximum sequential read and write speed, but for most users that is what actually feels fast or slow: most people only notice slowness when they're transferring large files over the network, not when IOPS are mediocre. Most people don't have super high IOPS demands except for video editors, and even video editors still need sequential reads to stream all those video files across the network and keep the timeline from stuttering during playback. So it's not a perfect test, but it is a good baseline that covers most people.

I'm on a Mac, and you can see I've already mounted my SMB share, so I just select the target drive, click open, click run, and we'll see what the numbers are for writing to and reading from that share. It is not perfect, and it is not a full scientific study, but it does give you numbers, and you can start to see how changes affect your raw throughput. On a PC it's very similar with CrystalDiskMark. All right, now that you've got a way to gauge your performance, we can go down this list.
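
If you prefer the command line, a rough sequential check against the mounted share works too. This is only a sketch, not the tool used in the video: it assumes a Linux client with GNU dd and a share mounted at /mnt/nas (a hypothetical path), and it is blunter than CrystalDiskMark or Blackmagic.

    # Rough sequential write test; conv=fdatasync flushes the data so you aren't just measuring local RAM.
    dd if=/dev/zero of=/mnt/nas/speedtest.bin bs=1M count=4096 conv=fdatasync status=progress

    # Drop the local page cache, then read the file back so the read actually crosses the network.
    sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
    dd if=/mnt/nas/speedtest.bin of=/dev/null bs=1M status=progress

    rm /mnt/nas/speedtest.bin
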
All right, the first category is your network being slow, and the very first reason is the fact that you are on Wi-Fi. By far the most common cause of a slow NAS and slow file transfers is that you're on Wi-Fi. Your router might say it can transfer at 2.8 gigabit; that is just not true. You're never going to see those numbers, because that figure is basically the aggregation of multiple Wi-Fi antennas and multiple protocols. Wi-Fi is just not going to give you dependable performance if that's what you need. I'm not saying Wi-Fi is bad, but if you're trying to figure out why your transfers are slow and you're on Wi-Fi, it's probably because you're on Wi-Fi. A simple test: move close to your Wi-Fi router. If transfers speed up, it's your Wi-Fi network, and that's just how it is. Wi-Fi 6 helps a little, but it still doesn't solve dropped packets and range causing big slowdowns. I can get 70 megabytes per second read and write if I'm about 10 feet line of sight from my router, but even one wall over it can drop to 30 megabytes per second, and upstairs it can be 5 megabytes per second. It is just unreliable. Realistically, expect anywhere from about one megabyte per second if you're nearly out of range up to maybe 70 megabytes per second at best; you might squeeze out a little more, but frankly only when you're right next to the router and everything is perfect.

Another thing to check: even if you've just plugged in an ethernet cable, make sure you're actually using it and not still on Wi-Fi. On a Mac, and I believe it's the same on a PC, if you mounted a share over Wi-Fi and then plug in an ethernet cable, the share will not remount over ethernet unless you force it to. New connections will go over the cable, but until you disconnect from that share and reconnect, it will keep using the Wi-Fi. A common fix is to turn Wi-Fi off and back on. It's just one thing to remember, and it's caused me issues before.

The second most common reason is that you're simply saturating your network connection. A gigabit connection is fast as ethernet goes, but in terms of transfer speeds it's not that fast. A one gigabit connection will optimally give you 125 megabytes per second of read and write. That's because network links are measured in bits, while file transfers are generally measured in bytes, and one byte is eight bits, so you divide: one gigabit divided by eight is 125 megabytes per second. That's often why people are confused about not getting one gigabyte per second; they're getting one gigabit per second, which honestly is not that fast. A one gigabit connection is easily saturated in most cases, and unless you upgrade to a 10 gig connection you're really not going to get better performance out of it. If you do go up to 10 gigabit, it's a lot harder to saturate; you'll probably get around one gigabyte per second, maybe 1.1 gigabytes per second, because there's a lot more overhead and moving that much data is harder. So those are your basic theoretical numbers.
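
One quick sanity check before blaming the NAS is to confirm what speed your link actually negotiated, and to translate that into the megabytes per second you can realistically expect. A minimal sketch for a Linux client; eth0 is an assumed interface name, and the arithmetic is the same bits-to-bytes division described above.

    # Show the negotiated link speed; a stray 100 Mb/s port or a bad cable will show up here.
    sudo ethtool eth0 | grep -i speed

    # Ceiling estimate: line rate in megabits divided by 8 gives megabytes per second,
    # before protocol overhead, so real SMB numbers land a bit below this.
    echo "$(( 1000 / 8 )) MB/s ceiling on gigabit"
    echo "$(( 10000 / 8 )) MB/s ceiling on 10 GbE (roughly 1 GB/s in practice)"
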
Really, the only way to increase that is to upgrade your network speed, or, in the special circumstance where multiple people are on your network, to use what's called link aggregation. But that only helps if you're fighting another user for bandwidth at the same moment. Another thing to check for, and I've actually seen this, is somebody who had a 100 megabit switch between their NAS and the machine they were connecting from. With networking you are as slow as the slowest link in the path, so they were only getting 100 megabit speeds rather than gigabit. That's something to check for.

Number three is doing link aggregation and expecting better single-user performance. You are not going to get better single-user performance with link aggregation, because it only increases the number of pipes; it does not increase the bandwidth of any single pipe, and a single user can only use one of those pipes at a time. If you have multiple people on your network all transferring files at once, having multiple pipes, i.e. link aggregation, can speed things up, but a single user will not see any performance increase. It's good for failover, but you're not going to see any real speedup. You might also see some tutorials online about something called SMB multichannel. SMB multichannel is a feature from the Windows world that lets SMB run over multiple aggregated connections, so if you have four gigabit links it can transfer from one computer to another at four gigabit. In Samba, though, this is very much still an experimental feature; it is enabled for testing and testing only. Do not enable it on a NAS where you expect to keep your files non-corrupted: it can cause silent file corruption due to packet mismatch, and all of your data can be rendered useless. They are working on adding it properly, but until they do, don't try to turn it on unless you're just messing around on a test bench and want to see what happens.

Reasons four and five are both jumbo frames: four is that you don't have them enabled, and five is that you have them enabled when they shouldn't be. Jumbo frames are a lifesaver for 10 gigabit connections. You're really not going to see any improvement, and you shouldn't turn them on, for gigabit connections, but on 10 gigabit they turned my 300 megabytes per second connection into a gigabyte per second connection, all by increasing my MTU from the standard 1500 to 9000.
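
If you do turn jumbo frames on, it's worth proving that the larger packets actually survive every hop before trusting your benchmark numbers. A rough sketch assuming a Linux client, an interface named eth0, and a NAS at 192.168.1.10; swap in your own names.

    # Raise the client MTU; the NAS and every switch in the path need the same setting or higher.
    sudo ip link set dev eth0 mtu 9000

    # Send a full-size frame with "don't fragment" set: 8972 bytes of payload + 20 IP + 8 ICMP = 9000.
    # If this fails while a plain ping works, something in the path can't pass jumbo frames.
    ping -M do -s 8972 -c 4 192.168.1.10
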
Essentially, jumbo frames let you cram more data into every network packet, which means there are fewer packets to transfer, which means your switch and your network cards have a lot less overhead to process. That can really increase performance, especially if you have a cheaper network card or a cheaper 10 gig switch. The catch is that jumbo frames are in no way part of the official ethernet standard. A lot of manufacturers have implemented them, but since they're not standardized, everybody did it their own way, and some didn't do it at all. You therefore need to make sure that every single switch, card, everything your packets pass through has jumbo frames enabled and is set to the same MTU or greater. If not, you'll run into an issue called packet segmentation, where a device that can't handle the full jumbo frame has to spend extra processing power chopping it into regular-size packets and sending those along, and that creates a huge bottleneck at whatever device is doing the chopping. That's why number five is jumbo frames being enabled but not supported end to end. One way to diagnose this is to turn jumbo frames off and see if performance improves. You can also try dropping your MTU from 9000 to 8000, in case some device implemented jumbo frames with a slightly different definition of how big a jumbo frame is; with a smaller MTU the packets will hopefully still pass through without segmentation, and you keep most of the benefit. For users with 10 gigabit connections who aren't seeing full throughput, enabling jumbo frames is a great place to start, because it can be a huge performance increase. Just make sure everything supports it.

Numbers six and seven are SMB issues. Number six is that you are using SMB version 1, aka CIFS. SMB1 is a lot slower than the versions that came after it and can really hold you back, especially on 10 gigabit connections, so I'd recommend turning off SMB1 if at all possible and enforcing at least SMB2. To do that, go to Control Panel, File Services, SMB, Advanced Settings, and set the minimum SMB protocol to SMB2. It might cause issues with some very old clients, but if you can enable it, it means everybody is at least talking SMB2; almost all clients default to at least SMB2, normally SMB3. Also, if you're mounting on a Mac, make sure to type smb:// rather than cifs://, because that's one thing I've found that really slows things down.

Issue number seven is not as common as it was a few years ago: you've got signed SMB packets turned on. This means every SMB packet has to be cryptographically signed, and that extra work is very slow and really drags down a network, especially at 10 gigabit speeds. Older versions of macOS turned this on by default, and it really hurt SMB performance. If you've updated macOS you shouldn't have this problem, but if you're on an older version there is a terminal command you can easily look up, and I'll leave an article on how to do it in the description below.
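
For reference, the tweak usually cited for older macOS versions is telling the SMB client not to require signing via /etc/nsmb.conf. Treat this as a hedged sketch rather than the exact command from the linked article, and only do it on a trusted local network; recent macOS releases already behave sensibly by default.

    # Tell the macOS SMB client not to require packet signing (older macOS versions; trusted LAN only).
    printf '[default]\nsigning_required=no\n' | sudo tee /etc/nsmb.conf

    # Disconnect and remount the share afterwards so the new setting takes effect.
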
That covers how to disable signed SMB packets. You can also go into the transport encryption mode setting on the Synology and make sure it is set to disable, which tells clients they're not allowed to try to encrypt SMB traffic, so you don't run into that performance bottleneck. SMB is not a protocol meant for the internet; it is meant for a trusted internal network, so signing and encrypting the packets just costs you a huge slowdown. If you're accessing your files over the internet, you need to be using a VPN or a different protocol, because SMB is not for the internet.

All right, we're through the network issues; now we're on to the disks in your Synology. Number eight is that you just have too few disks. This mostly matters for 10 gigabit connections, since basically any single disk can saturate a one gigabit connection by itself, so it's really only relevant for 10 gigabit connections or multiple simultaneous users. Due to the way RAID and SHR work, and I'm going to use RAID and SHR interchangeably here because in this case they very much are, the more disks you add to an array, the more disks can be read from and written to at the same time (other than RAID 1, but that's a special circumstance), so the more disks you have, the faster your pool will be. The way I always estimate this is to assume every disk gives about 200 megabytes per second of read and write; it's generally a bit lower than that, but 200 is an easy number that gets you in the ballpark. Then you count how many disks are used for parity: RAID 5 or SHR-1 is one disk, SHR-2 or RAID 6 is two disks, and RAID 10 and RAID 1 are very different circumstances with their own calculations. Subtract the parity disks from the total pool: if you have six disks in an SHR-1 configuration, that means five disks can be written to simultaneously, with the sixth effectively used for parity math, and you also have five disks to read back from at the same time. So five disks times roughly 200 megabytes per second gives you about one gigabyte per second as the theoretical max read and write speed for that pool. The parity math in both SHR and RAID takes a little more off the top, but 200 megabytes per second per data disk is a good starting point for the estimate. It also tells you that with only four disks in SHR-2 you're going to have a lot of trouble saturating a 10 gig connection, because you only have about 400 megabytes per second to read and write with, and that is a real reason people see slower transfers. Luckily, with a Synology, if you have spare bays you can just expand the array, and that will speed up your pool significantly.
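
To put that rule of thumb in one place: take the number of data disks (total minus parity) and multiply by roughly 200 megabytes per second per drive. A tiny sketch; 200 MB/s is the video's ballpark figure, not a measured number for your particular drives.

    # Rough sequential ceiling for a RAID/SHR pool, using the rule of thumb from the video.
    disks=6      # total drives in the pool
    parity=1     # 1 for SHR-1/RAID 5, 2 for SHR-2/RAID 6
    echo "$(( (disks - parity) * 200 )) MB/s theoretical sequential ceiling"
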
Another one I want to throw in here is SMR drives. There are a few SMR drives out there that are branded for NAS use; currently it's mainly the standard WD Red drives under about four terabytes, and I've not done a great job of figuring out exactly which models are SMR, but you want to avoid SMR in any NAS you've got, because it will slow you down a ton. Synology doesn't seem to have the same problem TrueNAS does, where ZFS, the file system behind TrueNAS and FreeNAS, hits the weaknesses of an SMR drive so hard during a rebuild that rebuilding the pool can take months because the drive is so slow. So I'd recommend checking that whatever drives you buy are CMR, because they just won't cause you headaches. A few programs have also been set up where you can contact the manufacturer and say, hey, you sold me SMR drives marketed for NAS use, I want CMR drives, and people have done that. I'm not going to go into it in depth here, and I probably need to do a video on it, but if possible, avoid SMR drives.

Number nine is that one of your disks is slow or failing. I've actually had a couple of cases where somebody bought six really nice NAS drives and said, oh, I've got this cheap external drive, I'll just shuck it and throw it in there as a seventh drive. The thing is, with RAID and SHR you are as fast as your slowest disk, because everything has to be read and written in parallel. If you have one slow disk, your entire pool operates at the speed of that one slow disk, so every disk you have will effectively run at 100 megabytes per second if your slowest disk can only manage 100 megabytes per second. Don't just throw any random drive into your pool, because it can really slow you down. If all your disks are the same model, this can also be a really good way to spot a failing drive. The way to test it is to kick off a disk benchmark like the one I've got here, then go into your NAS, open Resource Monitor, click on Disk, and view all. The disks should all show roughly the same utilization; if they're all about the same, they're all working equally hard and none is more overworked than the others. If one of them is sitting at 90-plus percent while the others aren't, it's probably failing or is simply a bad disk compared to the rest. In that case, back up all your crucial data and replace that drive, because it will really help speed up your pool. This is also a way of telling whether you're limited by your disks or by your protocol: kick off a test and look at total volume utilization. If it's above about 80 percent, you're probably being limited by your disks; anything under that, and you're most likely being limited by something else.
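
If you're comfortable with SSH, a rough command-line version of that Resource Monitor check is to watch per-disk utilization while a transfer is running. This assumes SSH is enabled and that iostat (from the sysstat package) is available, which isn't guaranteed on every DSM install, so treat it as an optional sketch.

    # Watch per-disk stats every 2 seconds while a big transfer runs.
    # One disk pegged near 100% utilization while its siblings idle is the classic sign of a slow or failing drive.
    iostat -x 2
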
Now, number 10: you have a write cache enabled. SSD write caches are something I really do not recommend, for a few different reasons, and this is one of them. SSD caching can give you a massive performance increase; I've seen it, I was able to saturate two 10 gigabit connections in link aggregation just by reading from the SSD cache. But the thing is, NVMe drives, and regular SATA SSDs as well, are not ultra fast to write to across their entire capacity. Generally, to speed things up, somewhere around 20 to 40 percent of the drive is reserved as a super fast writable area that acts as the drive's own internal write cache. Any data you write lands there first, the controller runs really fast, and everything goes great. But if you're transferring a massive amount of data, that area starts to fill up, and once it fills, performance can get really bad, because the drive has to clean up all that data while still accepting new writes. I've seen performance dip from a gigabyte per second all the way down to 200 megabytes per second because the cache was being hit too hard and too much data was being written to it. That's one of the many reasons I wouldn't recommend a write cache: it can fill up and slow down your entire pool. When you're writing data you're generally writing sequentially, which is something hard drives are really good at, especially in a RAID; six drives in a RAID 5 array can write at about a gigabyte per second. By putting an SSD cache in front of all that data, you can create a bottleneck where you just don't need one. Read caches don't have this issue, because reading back from anywhere on the SSD is ultra fast, and you also get twice the usable cache capacity. That's why I'd highly recommend a read-only SSD cache: it gives you better performance across the board, very few users will see any appreciable gain from read-write caching, and in my testing it's more likely to slow you down than speed you up. A pretty easy way to test this is to transfer a huge file, say 100 gigabytes, to your NAS. If it goes super fast until maybe 50 gigabytes in and then starts to slow down for no obvious reason, it's almost certainly because you have a write cache and you're hitting the internal cache limits of the NVMe drives. You can still use the drives; just set them up as a read-only cache and you will have a much better time, I promise.
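
For that large-file test, you can watch the falloff live from a Linux client by writing one big file to the share and keeping an eye on the running throughput. The mount point and size here are assumptions; make the file larger than any SSD cache you have.

    # Write ~100 GB of zeros to the share and watch the live MB/s figure that status=progress prints.
    # Starting fast and then dropping sharply partway through points at a filling write cache
    # (the NVMe cache volume or the drives' own internal caches) rather than a network limit.
    dd if=/dev/zero of=/mnt/nas/cache-test.bin bs=1M count=102400 status=progress
    rm /mnt/nas/cache-test.bin
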
Now on to number 11: your disks are full or need to be defragmented. Full disks have very little area left that's easily writable, so it can take a long time to write data to them. Anything over about 80 percent full is when you really start to notice the performance drop, because there just aren't contiguous sections of free space to write into. The other side of this is fragmentation: if you've written, deleted, written, deleted over and over, especially with small files, your disks end up with lots of small pockets of empty space, and large files have to be split into tiny chunks scattered all over the disk. The fix is a defragmentation. It does not play great with Btrfs snapshots, so if you do need to do it, delete any snapshots you can first so they don't balloon in size; I've found Btrfs snapshots can grow by about 10 percent in the worst case. Delete your snapshots, run the defragmentation, then re-enable snapshots, and it'll be like nothing ever happened. Generally you only need to defragment maybe once a year, and only if you've written and deleted a ton of data, and by a ton I mean something like two or three times the entire storage volume of your NAS.

Number 12 is our last disk reason, and it's a very special circumstance that occurs with SHR and mixed disk sizes; it's also the reason SHR is not available on Synology's enterprise units. SHR is great because it lets you get more storage out of mixed drives than regular RAID does. Say you've got six 4-terabyte drives and two 8-terabyte drives. Standard RAID would just treat all eight drives as 4-terabyte drives, and you would lose basically 8 terabytes of storage. SHR does something more efficient: it starts the same way, building a RAID across all eight drives as if they were all 4 terabytes, but then it takes the leftover space, the two extra 4-terabyte sections on the 8-terabyte drives, and builds another RAID volume there, a RAID 1 between the two of them, or a RAID 5 if there are three or more. That extra space gets added to the total usable space of the volume. This is great: you get an extra 4 terabytes in this example and you're not wasting the space. However, if you think about it, that extra region only spans two disks, and in practice it's one disk of speed, because the other copy is just the mirror. So when you're writing into that specific 4-terabyte section, you'll only get about 200 megabytes per second of read and write, because you're confined to that one small region and it doesn't have a lot of disks to spread across for better performance. That's actually why SHR isn't offered on the enterprise units. It's a very special circumstance: if you don't have mismatched disk sizes you won't run into it, and it's easy to solve by upgrading the remaining disks if you really need to. If you upgrade everything to 8-terabyte drives, you get full performance across the whole pool again. So that's one thing to note; you could see slightly odd behavior where performance is great most of the time but occasionally drops, and in that special circumstance this might be why.

Now we're moving on to the Synology itself being slow. Number 13 is that your data is encrypted but you don't have the hardware to handle it. Encrypting data involves a lot of math, and the CPU has to do it; if the CPU isn't built for it, it's going to be pretty slow, because there's a lot of work happening on every read and write. Most higher-end Synology units have dedicated encryption hardware, which lets the CPU treat encryption as a special instruction that runs really fast. My NAS has that encryption hardware, and I've not been able to notice any real difference between reading and writing from an encrypted or non-encrypted pool; they were pretty much the same. But if you don't have dedicated encryption hardware, you can really notice the slowdown, especially if your CPU is busy doing other things. Most higher-performing NAS units have it, and it shouldn't be a problem.
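
If you're curious whether your unit's CPU advertises hardware AES support, one rough check over SSH on Intel or AMD based models is to look for the aes flag in /proc/cpuinfo. This is only a sketch: it doesn't apply to ARM-based units, which handle crypto differently, and the flag alone isn't the whole story for encryption speed.

    # On an Intel/AMD Synology, an "aes" flag means the CPU has AES-NI instructions for fast encryption.
    grep -o -w -m1 aes /proc/cpuinfo || echo "no AES-NI flag found"
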
Number 14 is that you've got Synology Drive Server enabled as a team folder on whatever shared folder you're writing to. I found this in my own testing: I was trying to do a 10 gigabit SMB transfer of a huge batch of photos from my camera to the volume, but I had Synology Drive enabled on it, which meant Drive was indexing all the files coming in, generating previews for the photos, and making sure any other synced devices were updated with the right folders, and my performance really took a hit. For people who want to use Synology Drive, I'd recommend keeping your Synology Drive folder on a separate volume from the bulk storage where you need really fast read and write speeds. Most people are not syncing terabytes of files between their computer and their Synology, so there's no reason for the giant bulk-dump data to live in a Synology Drive folder; give Drive its own volume and keep your fast volume separate. That's just something I noticed when I was trying to figure out why a transfer was slow.

And finally, number 15: your NAS is just too slow or too busy. Specifically it's probably the CPU, but if you don't have a lot of RAM, or you've got a lot of RAM dedicated to virtual machines, you might also be running out of memory, so things have to be swapped to disk and back, which really slows down the NAS. A good way to test this is to dump a bunch of data to your NAS with all your services running, Docker containers, VMs, everything, and get a benchmark; then turn those services off and run the same benchmark. If performance improves significantly, your NAS is probably just too busy doing too many things, and from there there's not a ton you can do: you either turn off services, live with the performance, or upgrade your unit, because there's really no other way to solve it. A reboot might help, but in most cases probably not.

All right, that's it for my list. I hope you enjoyed it and found it helpful. If you want to hire me, there's a link for that in the description, and if you want to sponsor the channel, there's a link for that as well. All right, have a good one. Bye.
Info
Channel: SpaceRex
Views: 31,823
Id: AYEEfAI-Upo
Length: 29min 1sec (1741 seconds)
Published: Fri Jun 04 2021