  1. Flash memory: file system organisation issues (Nick Gaens)

  2. Outline • Introduction • Technologies • How does it work? • Limitations • File systems: problems and workarounds

  3. Quick introduction Flash memory is a non-volatile computer storage chip that can be electrically erased and reprogrammed. – Wikipedia

  4. Quick introduction Usage: almost everywhere. Latest trend: SSDs, the successors of HDDs.

  5. Quick introduction Overall level of research activity: quite low. How come? Flash has historically not been cost-effective, due to its high price and low life expectancy. Recently, interest is on the rise (IBM’s nanocrystals, terabyte thumb drives …).

  6. Technologies

  7. Technologies

  8. How does it work? A NAND chip consists of blocks, which consist of pages. Block: smallest unit of the erase operation. Page: smallest unit of the read / write operation.

  9. How does it work? Each page is in one of the following states: “alive” (contains current, valid data), “dead” (contains stale data) or “free” (can be written to).

  10. How does it work? Data is written to each page only once; data cannot be rewritten in place. Updating data therefore requires: • finding a free page, writing the new data there and marking it “alive”; • marking the previous page as “dead”. Problem: the old data wasn’t actually erased, so free space steadily runs out.

  11. How does it work? A garbage collector turns “dead” pages back into “free” ones. Erasing data requires: • reading all “alive” pages of a block; • writing them to free pages in another block; • erasing the entire original block and marking all its pages “free”.
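The page states, out-of-place update and garbage collection described on the two slides above can be sketched as follows. This is an illustrative toy model, not a real flash driver; the class name, block size and API are assumptions made for the example.

```python
# Toy model of NAND page states ("free"/"alive"/"dead"), out-of-place
# updates, and whole-block garbage collection. Sizes are illustrative.

PAGES_PER_BLOCK = 4

class Flash:
    def __init__(self, num_blocks):
        self.state = [["free"] * PAGES_PER_BLOCK for _ in range(num_blocks)]
        self.data = [[None] * PAGES_PER_BLOCK for _ in range(num_blocks)]

    def _find_free_page(self, exclude=None):
        for b, block in enumerate(self.state):
            if b == exclude:          # never relocate into the victim block
                continue
            for p, s in enumerate(block):
                if s == "free":
                    return b, p
        raise RuntimeError("no free page left: garbage collection needed")

    def write(self, value, old=None):
        """Write value to a fresh page; mark the previous copy dead."""
        b, p = self._find_free_page()
        self.data[b][p] = value
        self.state[b][p] = "alive"
        if old is not None:
            ob, op = old
            self.state[ob][op] = "dead"   # stale, but not yet erased
        return b, p

    def gc_block(self, b):
        """Copy alive pages of block b elsewhere, then erase the block."""
        for p in range(PAGES_PER_BLOCK):
            if self.state[b][p] == "alive":
                nb, np_ = self._find_free_page(exclude=b)
                self.data[nb][np_] = self.data[b][p]
                self.state[nb][np_] = "alive"
        self.state[b] = ["free"] * PAGES_PER_BLOCK   # whole-block erase
        self.data[b] = [None] * PAGES_PER_BLOCK

flash = Flash(num_blocks=2)
loc = flash.write("v1")             # initial write
loc = flash.write("v2", old=loc)    # update: new page alive, old page dead
flash.gc_block(0)                   # reclaim the dead page in block 0
```

Note that the erase really is a three-step affair (read alive pages, rewrite them, wipe the block), which is exactly why the next slides call it slow.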

  12. Limitations A block can endure only a limited number (about 10^6) of erase cycles before becoming unusable. How to extend the lifetime of flash drives? By introducing a wear-leveling policy that spreads erase operations evenly over all blocks of the memory.
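A wear-leveling policy in its simplest form can be sketched like this: when a block must be erased, prefer the candidate that has been erased least often. The 10^6 endurance figure comes from the slide; the least-worn-first selection rule and the function names are assumptions for illustration.

```python
# Minimal wear-leveling sketch: among candidate blocks, erase the one
# with the fewest erase cycles, so wear spreads evenly over the device.

ERASE_LIMIT = 10**6   # endurance per block, per the slide

def pick_victim(erase_counts, candidates):
    """Return the least-worn candidate block that is still usable."""
    usable = [b for b in candidates if erase_counts[b] < ERASE_LIMIT]
    if not usable:
        raise RuntimeError("all candidate blocks are worn out")
    return min(usable, key=lambda b: erase_counts[b])

counts = {0: 120, 1: 5, 2: 5000}
victim = pick_victim(counts, [0, 1, 2])   # block 1 is least worn
```

Real controllers also migrate rarely-updated ("cold") data off least-worn blocks, but the greedy rule above captures the core idea.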

  13. Limitations The erase operation is very slow, since it is composed of the three steps described before. How slow? Five times slower than reading and two times slower than writing. Impact on flash database design: it affects the use of tree structures (e.g. B+-trees).
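Working out the slide’s figures as relative costs: if erase = 5 × read and erase = 2 × write, then write = 2.5 × read. That asymmetry is what penalizes write-heavy index structures such as B+-trees, where every node update forces a fresh page write.

```python
# Relative operation costs implied by the slide's figures,
# expressed in "read units" (read = 1).
READ = 1.0
ERASE = 5 * READ      # erase is 5x slower than reading
WRITE = ERASE / 2     # erase is 2x slower than writing -> write = 2.5x read
```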

  14. File systems Traditional file systems (such as NTFS, FAT(32), HFS(+), UDF and ext2/3/4) are most commonly used with disk-based storage devices (HDDs, DVDs). Reusing these FSs on flash-based storage devices is convenient and cheap, but naïve: it sacrifices performance gains and shortens the flash memory’s lifetime.

  15. File systems How come? The erase operation of flash memory is explicit and expensive, and thus best scheduled during idle time. (Disks don’t require such scheduling at all.)

  16. File systems How come? Flash memory devices impose no seek latency, so randomly accessing memory locations doesn’t cause a performance disaster. Disk file systems, however, are optimized to avoid disk seeks whenever possible, due to the high cost of seeking on disk-based devices.

  17. File systems How come? Flash memory devices tend to wear out when a single block is repeatedly overwritten. Wear-leveling: a necessity. (Flash file systems are designed to spread out writes evenly.)

  18. Workarounds Adapt the existing FSs by adding a layer between them and the flash: the Flash Translation Layer (FTL). This layer takes care of the constraints and restrictions that flash memory introduces.
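The essence of an FTL can be sketched as a logical-to-physical page map: the file system keeps issuing "in-place" writes to logical page numbers, and the FTL silently redirects each one to a fresh physical page. The flat list of physical pages and the class/method names are simplifications assumed for this example.

```python
# Hedged sketch of a page-level Flash Translation Layer. The FS above
# sees stable logical page numbers; the FTL maps each write to a fresh
# physical page and marks the old copy as stale (for later GC).

class FTL:
    def __init__(self, num_physical_pages):
        self.mapping = {}                         # logical -> physical
        self.free = list(range(num_physical_pages))
        self.dead = set()                         # stale physical pages

    def write(self, logical, value, storage):
        phys = self.free.pop(0)                   # always a fresh page
        storage[phys] = value
        if logical in self.mapping:
            self.dead.add(self.mapping[logical])  # old copy is now stale
        self.mapping[logical] = phys

    def read(self, logical, storage):
        return storage[self.mapping[logical]]

storage = [None] * 8
ftl = FTL(num_physical_pages=8)
ftl.write(0, "a", storage)
ftl.write(0, "b", storage)   # an "in-place" update from the FS's view
```

A real FTL also runs wear leveling and garbage collection behind this map, which is why it can hide flash’s constraints from an unmodified disk file system.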

  19. Workarounds Log-structured File System Conventional file systems take great care over spatial locality and make in-place changes to their data structures (because seeking on magnetic disks is slow). Hypothesis: the ever-increasing amount of system memory makes this concern obsolete.

  20. Workarounds Log-structured File System A lot of available system memory would lead to I/O becoming extremely write-heavy. (Reads can be served from the in-memory cache.) How to exploit this (hypothetical) situation?

  21. Workarounds Log-structured File System Treat storage as a circular log and write sequentially to the head of that log, maximizing write throughput. (Positive side effects of this technique are snapshotting, improved crash recovery and taming the GC via divide and conquer.)
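The circular-log idea can be sketched as an append-only record list plus an in-memory index that maps each key to its newest log position. The class and key names are illustrative assumptions, not part of any real LFS implementation.

```python
# Sketch of log-structured writes: every write, including an update,
# is appended at the head of the log; an in-memory index tracks the
# newest copy. Old records become garbage for the log cleaner.

class LogFS:
    def __init__(self):
        self.log = []        # append-only sequence of (key, value) records
        self.index = {}      # key -> position of the newest record

    def write(self, key, value):
        self.index[key] = len(self.log)   # newest copy wins
        self.log.append((key, value))     # strictly sequential append

    def read(self, key):
        return self.log[self.index[key]][1]

fs = LogFS()
fs.write("inode1", "v1")
fs.write("inode1", "v2")     # update appends; the old record is now stale
```

Sequential appends are exactly the access pattern flash handles best, which is why log-structured designs map so naturally onto it.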

  22. Workarounds Workarounds remain what they are … just workarounds. A native flash file system can, by design, provide an environment in which performance isn’t limited by any ‘extra’ layer. (Examples are JFFS(2), YAFFS, TrueFFS and ExtremeFFS.)

  23. Flash file system However, in practice, flash file systems are only used for “Memory Technology Devices” (MTDs): embedded flash memories that have no controller to take care of the FTL or any other workaround. Most commercial flash memories (e.g. SD cards, SSDs) do have such a controller.

  24. Flash file system These controllers continue to offer ever-increasing levels of performance, silencing the call for native flash file systems. Moreover, benchmarks that directly compare flash FSs to traditional ones are not easy to carry out.

  25. Conclusions Flash memory brings new levels of raw performance to storage, although it has some issues / caveats. Consumer-level flash-based mass storage devices are becoming increasingly affordable and feasible. Consequence: ‘naked’ traditional file systems are quite dumb when it comes to interfacing with flash memory.

  26. Conclusions Solution (?): provide all sorts of high-performance workarounds that take care of the issues mentioned before. Native flash file systems don’t need such workarounds at all, which makes them attractive. In practice, though, these flash FSs are of little use, since they require direct access to the flash, without e.g. a controller in between.

  27. Discussion How many of you own an SSD? Are you aware of the limited life expectancy of such devices?

  28. Discussion Co-presentation: the need for advanced data structures that let Game AI algorithms perform faster on e.g. range queries over large numbers of NPCs. The underlying cause of this need is the lack of high-performance mass data storage. Does the rise of flash memory make this research obsolete?