Do SSD drives slow down when near full?

Sharky Forums



  1. #1
    Hammerhead Shark yngwie98's Avatar
    Join Date
    Oct 2000
    Location
    West Chester, PA, USA
    Posts
    1,932

    Do SSD drives slow down when near full?

    Thinking of upgrading my 3.5 year old laptop to a 128GB Crucial m4 SSD. Currently using ~130GB out of 320, but am gonna reinstall Vista on the new drive. I could probably get by with ~100GB. But will that slow down the drive with it being near full? Should I get a 256GB or hybrid HD instead?
    Last edited by yngwie98; 07-04-2012 at 01:08 PM.
    Friends don't let friends drink cheap beer.

    BSF - we know drama.
    OC Crusaders
    Sharky Forums Folding Team

  2. #2
    Great White Shark
    Join Date
    Nov 2000
    Location
    Alpharetta, Denial, Only certain songs.
    Posts
    9,925
    Short answer: yes and no.

    Longer answer (basic structure first): SSDs are made of flash. Flash is made up of cells, which store 1-3 bits each depending on the type of flash. Cells are arranged into pages, which are the smallest writable unit. Pages are arranged into blocks, which are the smallest erasable unit.


    Yes: SSDs can slow down long before they are completely full. They slow down once every block/page/cell of flash has been written to at least once. At that point you run into a situation called a read/modify/write: data needs to be written that is larger than the unused area left in a block. The drive then has to read the existing data out of all of the valid pages in the block, erase the block entirely, and write the old data plus the new data back into the pages it needs, leaving the rest marked as "unused." TRIM helps alleviate this by returning blocks/pages to that cleaned-up state a bit more proactively.
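
    To make that read/modify/write cycle concrete, here is a toy Python sketch of the page/block layout and the slow path. It is not how any real controller works; the sizes and names are made up purely for illustration.

    Code:
    # Toy model of NAND layout: pages are the smallest writable unit,
    # blocks (groups of pages) are the smallest erasable unit.
    # Sizes are illustrative, not any particular drive's geometry.
    PAGE_SIZE = 4096          # bytes per page (hypothetical)
    PAGES_PER_BLOCK = 64      # pages per erase block (hypothetical)

    class Block:
        def __init__(self):
            # None = erased/empty page, bytes = programmed page
            self.pages = [None] * PAGES_PER_BLOCK

        def free_pages(self):
            return sum(p is None for p in self.pages)

    def write_to_block(block, new_pages):
        """Write page-sized payloads into a block.

        If there aren't enough erased pages, the drive must do a
        read/modify/write: read the valid data out, erase the whole
        block, then program old + new data back in.
        """
        if len(new_pages) <= block.free_pages():
            for payload in new_pages:                 # fast path: program erased pages
                block.pages[block.pages.index(None)] = payload
            return "direct write"
        surviving = [p for p in block.pages if p is not None]   # 1. read out old data
        block.pages = [None] * PAGES_PER_BLOCK                  # 2. erase the block
        # Overflow would spill into another block on a real drive; the toy just truncates.
        for payload in (surviving + new_pages)[:PAGES_PER_BLOCK]:
            block.pages[block.pages.index(None)] = payload       # 3. program old + new
        return "read/modify/write"

    blk = Block()
    print(write_to_block(blk, [b"x" * PAGE_SIZE] * 60))  # plenty of erased pages
    print(write_to_block(blk, [b"y" * PAGE_SIZE] * 10))  # forces a read/modify/write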

    No: most modern SSDs make good use of TRIM and garbage collection, so they won't slow down past a certain point, and they never get slower than a traditional HDD, even when more or less entirely full.

    With SSDs, what matters is more or less how much data you turn over (write/delete), not how much you store.

    Crusader for the 64-bit Era.
    New Rule: 2GB per core, minimum.

    Intel i7-9700K | Asrock Z390 Phantom Gaming ITX | Samsung 970 Evo 2TB SSD
    64GB DDR4-2666 Samsung | EVGA RTX 2070 Black edition
    Fractal Arc Midi |Seasonic X650 PSU | Klipsch ProMedia 5.1 Ultra | Windows 10 Pro x64

  3. #3
    Hammerhead Shark yngwie98's Avatar
    Join Date
    Oct 2000
    Location
    West Chester, PA, USA
    Posts
    1,932


    Thanks for the reply. My laptop has Vista (Home Premium 32-bit), which doesn't natively support TRIM. Ordered the drive, guess I'll find out. Could do the $40 Windows 8 upgrade if I have any major issues.

    http://windowsteamblog.com/windows/b...for-39-99.aspx
    Friends don't let friends drink cheap beer.

    BSF - we know drama.
    OC Crusaders
    Sharky Forums Folding Team

  4. #4
    nuclear launch detected kpxgq's Avatar
    Join Date
    Jun 2001
    Location
    texas
    Posts
    16,612
    I honestly think TRIM is overrated for the average user... GC (garbage collection) on a modern SSD is adequate
    bitfenix prodigy, i5 4670k, asrock z87e-itx, zotac gtx 970, crucial m500 msata, seasonic x650, dell st2220t

  5. #5
    MakoSharkero bldegle2's Avatar
    Join Date
    Jun 2001
    Location
    Floyd, VA, usa
    Posts
    3,044
    Typically you don't want to fill your SSD to more than 75% capacity or it will slow down a bit; garbage collection needs a bit of room to work right... that way the NAND cell usage can be spread over the whole drive, so one part of the drive will not wear out before another
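
    For what it's worth, the "spreading the wear" part can be pictured with a tiny Python sketch. This has nothing to do with any real controller's firmware; the block names and counts are invented. It just shows the idea of always writing to the least-worn free block, which is why the drive likes having spare room to choose from.

    Code:
    # Toy wear-leveling picker (purely illustrative, not real firmware).
    erase_counts = {f"block{i}": 0 for i in range(8)}   # hypothetical 8-block drive
    free_blocks = {"block2", "block5", "block7"}        # blocks holding no live data

    def pick_block():
        # Among the free blocks, write to the one erased the fewest times,
        # so no single block wears out ahead of the rest.
        target = min(free_blocks, key=erase_counts.get)
        erase_counts[target] += 1
        return target

    print(pick_block())   # the least-worn of block2/block5/block7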

    Regular spinning HDs (especially the main OS drive/partition) shouldn't exceed 85% or you will run into fragmentation/defragging problems....

    nature of the beast...

    I have been through a couple of SSDs; some worked, others were just bad from the start. When they work right, they're insanely fast... they still have some maturing to do, but they are getting much better....

    Just make sure you backup regularly, when they go it can be quick and painful to your data...

    You can reduce the size of your install by eliminating Hiberfil first thing after the OS install (SSDs can have problems with hibernation)... there are some other thangs you can do to reduce your OS footprint... My Win7 Ultimate 64-bit only takes up about 20 gigs of the 120GB drive I use for OS only, with a 10,000rpm, 600GB WD spinner as a secondary drive for everything else....

    laterzzz.....
    Last edited by bldegle2; 07-15-2012 at 07:09 AM.
    I am gettin too old for all this st.ff!

    Specs? it runs.................

    Tbird quotes:

    "I dont care that much for gaming"
    "I am done with 3dmark."

    AsRock 970 Extreme4,Vishera 8320 @4.6, Vertex 4 256GB SataIII SSD, 2xVelociraptor 600GB 10,000 spinner in raid 0 storage....16g Gskill DDR3 2133 @2292, ATI 6850, back on huge air (quiet)....HP Laptop redone OS (ie, no HP krud), AMD Phenom II N620, 8gig DDR3 1333 ram, Sanddisk SataII 120GB SSD, Toshiba 500GB 7200 spinner...

  6. #6
    LOLWUT ImaNihilist's Avatar
    Join Date
    Nov 2001
    Location
    San Francisco
    Posts
    14,034
    Quote Originally Posted by bldegle2
    Typically you don't want to fill your SSD to more than 75% capacity or it will slow down a bit; garbage collection needs a bit of room to work right... that way the NAND cell usage can be spread over the whole drive, so one part of the drive will not wear out before another

    Regular spinning HDs (especially the main OS drive/partition) shouldn't exceed 85% or you will run into fragmentation/defragging problems....
    This is not true.

    A spinning HDD sees performance degradation for every additional bit written, because of the distance you have to travel across the platters. Once you go over 50%, it will really start to feel a lot slower than it was when it was new. In some cases, 50% capacity means a 25% performance penalty. A good drive will see a 25-30% reduction in performance around 90% capacity. Either way, you see a marginal decline for both reads and writes with every bit written.

    In theory, an SSD at 10% capacity and an SSD at 90% capacity should be the same speed, at least with respect to read performance. They will even have the same write performance the first time you write to the entire disk. The problem with an SSD comes with "random writes" and fragmentation. If all your writes are sequential, in nice little blocks, you won't really see any performance degradation until the entire drive has been written to once. The performance problems with SSDs start when you have to re-write, and when you do it in a non-sequential way.

    As kpxgq pointed out, this really isn't a problem for most people. In fact, it isn't a problem AT ALL until you fill the drive at least once. TRIM is useful, but not necessary unless you are "deleting" a lot of things. When you delete something in your OS, the file isn't actually deleted; the markers that indicate the file exists are just removed. On an SSD, a file that has been deleted still takes up space; the cell is never "emptied", just passed over. When you go to re-write, the SSD has to look around and see if there are any empty cells. That takes time. If there aren't any empty cells, it has to empty a cell first, then write, which takes more time. With TRIM, when you delete a file at the OS level, the computer tells the SSD, "Hey, you can empty that cell now." That way, when the computer goes to write, it sees empty cells instead of full cells that need to be emptied and then re-written. The controller also keeps track of these blocks so it knows where they are.
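
    Here is a rough Python sketch of that erase-in-the-write-path penalty. The timings are completely made up, and erases are collapsed to a single page for brevity, whereas real drives erase whole blocks.

    Code:
    # Toy sketch of why TRIM helps. Costs are arbitrary time units, and the
    # erase granularity is simplified to one page (real drives erase blocks).
    PROGRAM_COST = 1    # time to program a page
    ERASE_COST = 10     # erasing is much slower than programming

    class ToySSD:
        def __init__(self, pages):
            self.state = ["empty"] * pages   # "empty" | "live" | "stale"

        def write(self, n):
            """Write n pages, returning the simulated time spent."""
            cost = 0
            for _ in range(n):
                if "empty" in self.state:
                    idx = self.state.index("empty")   # pre-erased page, fast path
                else:
                    idx = self.state.index("stale")   # must erase inside the write path
                    cost += ERASE_COST
                self.state[idx] = "live"
                cost += PROGRAM_COST
            return cost

        def delete(self, n, trim):
            """Delete n pages; with TRIM the drive can erase them right away."""
            for _ in range(n):
                idx = self.state.index("live")
                self.state[idx] = "empty" if trim else "stale"

    for trim in (False, True):
        ssd = ToySSD(100)
        ssd.write(100)             # fill the drive once
        ssd.delete(50, trim=trim)  # "delete" half of it at the OS level
        print(f"TRIM={trim}: rewriting 50 pages costs {ssd.write(50)} units")

    Without TRIM the rewrite pays the erase cost on every page; with TRIM the erases already happened at delete time, so the rewrite only pays the program cost.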

    The problem that TRIM solves is a problem that most people don't have, TBH. Most people will never write the full capacity of their drive, let alone write and re-write. If you are doing video editing TRIM can be quite helpful. It's also helpful for applications like Steam which are constantly updating (deleting, re-writing) large amounts of files.

    The biggest problem with SSDs is how we interact with them at an OS level. We still use this idea of "free space" and "used space", but with an SSD that isn't really how it works. Instead there are three states: full, not in use, and empty. The middle state, "not in use", is not something that the SSD inherently understands; that's where TRIM comes in. It would be helpful to see this at an OS level. Knowing how much "empty" space you have (and where it is) is really the most important thing in determining write performance. Most controllers are pretty good about managing this kind of fragmentation in the background, but you can still get some slowdowns in write performance if you have large blocks of "not in use" space that need to be rewritten to. You really only get read performance slowdowns in extreme fragmentation situations. I've never encountered them in the real world. I suppose you would have these kinds of problems in a database which is writing lots of small files all over the place, but you're unlikely to encounter a scenario on the desktop where a modern SSD is going to experience massive read performance degradation.
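
    As a rough illustration of that mismatch (the page counts below are invented), the OS and the drive are really answering two different questions:

    Code:
    # The OS only distinguishes "used" from "free"; the drive also cares
    # whether a "free" page is truly erased ("empty") or still holds stale
    # data ("not in use") that must be erased before it can be rewritten.
    pages = {"full": 300, "not_in_use": 500, "empty": 200}   # hypothetical drive

    os_used = pages["full"]
    os_free = pages["not_in_use"] + pages["empty"]    # what the OS reports as free
    instantly_writable = pages["empty"]               # what actually governs write speed

    print(f"OS view:  {os_used} pages used / {os_free} pages free")
    print(f"SSD view: only {instantly_writable} pages are writable without an erase first")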

    For a more thorough explanation of exactly what goes on with an SSD, see here: http://www.anandtech.com/show/2738/8 The whole article is worth reading, but that page in particular highlights the main issue with SSDs.
    Last edited by ImaNihilist; 07-15-2012 at 09:42 PM.

  7. #7
    Great White Shark
    Join Date
    Nov 2000
    Location
    Alpharetta, Denial, Only certain songs.
    Posts
    9,925
    Quote Originally Posted by ImaNihilist
    This is not true.

    A spinning HDD sees performance degradation for every additional bit written, because of the distance you have to travel across the platters.
    This is the reason that in a lot of IT environments, before the advent of SSDs and relatively cheap RAM, "short stroking" was a common practice. Unlike CDs, HDDs write from the outside edge of the platter inwards. This means two things: 1) As you move towards the middle of the platter, each revolution passes fewer sectors beneath the r/w head. 2) As ImaNihilist said, as you fill up the disk, random reads and writes take longer as well, since you can have wild swings of the r/w head back and forth across the full surface of the platter.

    Getting back to "short stroking" (insert your favorite joke here). This was the practice of using an extremely small percentage of the HDD's full capacity to ensure that all of your data was being written to and read from the fastest sections of the disk, as well as minimizing the distance the r/w head had to travel to do so. Entire banks of HDDs would be formatted to use only 10-20% of the listed capacity to ensure maximum speed of the storage backend. Extremely wasteful, but at the time it was the only way to really increase performance of something like a database that was too big to fit in RAM.
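
    A quick back-of-the-envelope Python sketch of why that works, with idealised, made-up platter radii (real drives differ): at constant RPM the sequential data rate on a track scales roughly with its radius, and capacity fills from the outer edge inwards in proportion to swept platter area.

    Code:
    import math

    # Hypothetical platter geometry, purely for illustration.
    R_OUTER = 4.6   # cm, outermost usable radius
    R_INNER = 2.0   # cm, innermost usable radius

    def radius_after_filling(fraction):
        # Drives fill from the outer edge inwards; capacity ~ swept area.
        return math.sqrt(R_OUTER**2 - fraction * (R_OUTER**2 - R_INNER**2))

    def relative_rate(radius):
        # Sequential data rate relative to the outermost track (constant RPM).
        return radius / R_OUTER

    for fraction in (0.2, 0.5, 1.0):
        r = radius_after_filling(fraction)
        print(f"using the outer {fraction:.0%} of capacity: slowest track runs at "
              f"{relative_rate(r):.0%} of the outer-edge rate")

    With these made-up numbers, short stroking to the outer 20% keeps even the slowest track at roughly 90% of the outer-edge rate, while filling the whole disk drops it to well under half, and it also confines the head to a narrow band so seeks stay short.
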
    Last edited by James; 07-16-2012 at 08:27 AM.

    Crusader for the 64-bit Era.
    New Rule: 2GB per core, minimum.

    Intel i7-9700K | Asrock Z390 Phantom Gaming ITX | Samsung 970 Evo 2TB SSD
    64GB DDR4-2666 Samsung | EVGA RTX 2070 Black edition
    Fractal Arc Midi |Seasonic X650 PSU | Klipsch ProMedia 5.1 Ultra | Windows 10 Pro x64
