
Full metadata record

DC Field | Value | Language
dc.contributor.author | 원유집 | -
dc.date.accessioned | 2019-12-07T20:26:12Z | -
dc.date.available | 2019-12-07T20:26:12Z | -
dc.date.issued | 2018-04 | -
dc.identifier.citation | ACM TRANSACTIONS ON STORAGE, v. 14, no. 2, Article no. 17 | en_US
dc.identifier.issn | 1553-3077 | -
dc.identifier.issn | 1553-3093 | -
dc.identifier.uri | https://dl.acm.org/citation.cfm?doid=3208078.3162614 | -
dc.identifier.uri | https://repository.hanyang.ac.kr/handle/20.500.11754/118422 | -
dc.description.abstract | In this work, we develop the Orchestrated File System (OrcFS) for Flash storage. OrcFS vertically integrates the log-structured file system and the Flash-based storage device to eliminate the redundancies across the layers. A few modern file systems adopt sophisticated append-only data structures in an effort to optimize the behavior of the file system with respect to the append-only nature of Flash memory. While the benefit of adopting an append-only data structure seems fairly promising, it leaves the stack of software layers full of unnecessary redundancies, with substantial room for improvement. The redundancies include (i) redundant levels of indirection (address translation), (ii) duplicate efforts to reclaim invalid blocks (i.e., segment cleaning in the file system and garbage collection in the storage device), and (iii) excessive over-provisioning (i.e., separate over-provisioning areas in each layer). OrcFS eliminates these redundancies by distributing address translation, segment cleaning (or garbage collection), bad block management, and wear-leveling across the layers. Existing solutions suffer from high segment cleaning overhead and cause significant write amplification due to the mismatch between the file system block size and the Flash page size. To optimize the I/O stack while avoiding these problems, OrcFS adopts three key technical elements. First, OrcFS uses disaggregate mapping, whereby it partitions the Flash storage into two areas, managed by the file system and the Flash storage, respectively, at different granularity. In OrcFS, the metadata area and the data area are maintained at 4 Kbyte page granularity and 256 Mbyte superblock granularity, respectively. The superblock-based storage management aligns the file system section size, which is the unit of segment cleaning, with the superblock size of the underlying Flash storage. This allows OrcFS to fully exploit the internal parallelism of the underlying Flash storage, leveraging the sequential workload characteristics of the log-structured file system. Second, OrcFS adopts quasi-preemptive segment cleaning to prevent foreground I/O operations from being interfered with by segment cleaning. The latency to reclaim free space can be prohibitive in OrcFS due to its large file system section size of 256 Mbyte. OrcFS effectively addresses this issue by adopting a polling-based segment cleaning scheme. Third, OrcFS introduces block patching to avoid unnecessary write amplification in the partial page program. OrcFS is an enhancement of the F2FS file system. We develop a prototype of OrcFS based on F2FS and a server-class SSD with modified firmware (Samsung 843TN). OrcFS reduces the device mapping table requirement to 1/465 and 1/4 of that of page mapping and of the smallest publicly known mapping scheme, respectively. By eliminating the redundancy between segment cleaning and garbage collection, OrcFS reduces the write volume by 1/3 under a heavy random write workload. OrcFS achieves a 56% performance gain over EXT4 in the varmail workload. | en_US
dc.description.sponsorship | This research was supported by the Basic Research Lab Program through the NRF funded by the Ministry of Science, ICT and Future Planning (No. 2017R1A4A1015498), the BK21 Plus program through the NRF funded by the Ministry of Education of Korea, the ICT R&D program of MSIP/IITP (R7117-16-0232, Development of extreme I/O storage technology for 32Gbps data services), and the Ministry of Science, ICT and Future Planning under the ITRC support program (IITP-2016-H8501-16-1006) supervised by the IITP. | en_US
dc.language.iso | en_US | en_US
dc.publisher | ASSOC COMPUTING MACHINERY | en_US
dc.subject | Log-structured File System | en_US
dc.subject | Flash memories | en_US
dc.subject | Garbage Collection | en_US
dc.title | OrcFS: Orchestrated File System for Flash Storage | en_US
dc.type | Article | en_US
dc.relation.no | 2 | -
dc.relation.volume | 14 | -
dc.identifier.doi | 10.1145/3162614 | -
dc.relation.page | 1-26 | -
dc.relation.journal | ACM TRANSACTIONS ON STORAGE | -
dc.contributor.googleauthor | Yoo, Jinsoo | -
dc.contributor.googleauthor | Oh, Joontaek | -
dc.contributor.googleauthor | Lee, Seongjin | -
dc.contributor.googleauthor | Won, Youjip | -
dc.contributor.googleauthor | Ha, Jin-Yong | -
dc.contributor.googleauthor | Lee, Jongsung | -
dc.contributor.googleauthor | Shim, Junseok | -
dc.relation.code | 2018004675 | -
dc.sector.campus | S | -
dc.sector.daehak | COLLEGE OF ENGINEERING[S] | -
dc.sector.department | DEPARTMENT OF COMPUTER SCIENCE | -
dc.identifier.pid | yjwon | -
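The mapping-table saving from the disaggregate mapping described in the abstract can be illustrated with a back-of-the-envelope sketch. All numbers below (4-byte mapping entries, a 512 GB device, a 1 GB page-mapped metadata area) are hypothetical assumptions for illustration, so this does not reproduce the paper's exact 1/465 figure, which depends on the actual device and metadata/data split:

```python
# Back-of-the-envelope comparison: pure page mapping vs. disaggregate
# mapping (small page-mapped metadata area + superblock-mapped data area).
# All sizes are illustrative assumptions, not the paper's measured setup.

KB, MB, GB = 2**10, 2**20, 2**30

ENTRY_BYTES = 4          # assumed size of one mapping entry
PAGE = 4 * KB            # Flash page granularity (metadata area)
SUPERBLOCK = 256 * MB    # superblock granularity (data area)

def table_bytes(capacity: int, granularity: int) -> int:
    """Mapping table size for a region mapped at the given granularity."""
    return capacity // granularity * ENTRY_BYTES

device = 512 * GB        # assumed device capacity
meta = 1 * GB            # assumed page-mapped metadata area

# Pure page mapping: one entry per 4 KB page of the whole device.
page_mapped = table_bytes(device, PAGE)

# Disaggregate mapping: fine-grained entries only for the metadata area,
# coarse superblock entries for the (much larger) data area.
disaggregate = table_bytes(meta, PAGE) + table_bytes(device - meta, SUPERBLOCK)

print(f"page mapping:         {page_mapped / MB:.1f} MB")
print(f"disaggregate mapping: {disaggregate / KB:.1f} KB")
```

Under these assumptions the table shrinks from hundreds of megabytes to about a megabyte, since almost all of the device is covered by a few thousand coarse superblock entries; the page-granularity cost is confined to the small metadata area.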
Appears in Collections:
COLLEGE OF ENGINEERING[S](공과대학) > COMPUTER SCIENCE(컴퓨터소프트웨어학부) > Articles
Files in This Item:
There are no files associated with this item.
