
How to estimate actual flash memory wear caused by SQLite3?



  • As flash memory blocks do wear out after a number of erase cycles, I wonder about the practical implications.

    The datasheet of the memory chip itself (AFAIK Winbond W25Q128FV for Omega2, W25Q256FV for Omega2+) states "More than 100,000 erase/program cycles".

    I guess this means the entire flash memory can be erased and reprogrammed at least 100'000 times. This should be well enough for any foreseeable number of firmware updates 😉

    However, the thing I'm looking at is a 24/7 running device, which has to store a few hundred, maybe up to 1-2k state changes (with ~50 bytes of actual data each) per day into a SQLite3 database, stored on the JFFS2 overlay file system of OnionOS/OpenWrt. I find it hard to guess how much raw write activity a single SQLite row insert actually causes. Assuming 256 bytes is enough including all SQLite overhead, each state change would need erasing/writing one block of the chip.

    Assuming 500 state changes per day, and assuming the writes are levelled evenly over all available blocks (65536 of them), wearing out all blocks would take about 35'000 years (see the sketch at the end of this post). That sounds like a lot of margin for a real world application!

    Only - are the guesses made above realistic? Can we rely on jffs2 wear leveling to distribute the writes that evenly? Are there SQLite house keeping operations that might cause much more writing? Anything else I might have forgotten?
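
    For reference, here is that back-of-envelope estimate written out as a small Python sketch. It only restates the assumptions above (perfect wear levelling, one block erase per state change, 65536 blocks, 100'000 cycles), so treat it as an upper bound, not a prediction:

    # Back-of-envelope flash wear estimate, using only the assumptions stated above.
    ERASE_CYCLES = 100_000       # datasheet: "more than 100,000 erase/program cycles"
    TOTAL_BLOCKS = 65_536        # 16 MB chip divided into 256-byte units, as assumed above
    WRITES_PER_DAY = 500         # state changes per day
    ERASES_PER_WRITE = 1         # block erases caused by one state change (write amplification)

    total_erase_budget = ERASE_CYCLES * TOTAL_BLOCKS
    days = total_erase_budget / (WRITES_PER_DAY * ERASES_PER_WRITE)
    print(f"~{days / 365:,.0f} years until the erase budget is exhausted")  # ~35'900 years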



  • @luz
    These are all good questions and the answers should definitely be of interest to everyone here.



  • @luz According to the serial console boot log of my Omega2+ it has got a Macronix chip:

    flash manufacture id: c2, device id 20 19
    find flash: MX25L25635E

    About that "35'000 years" - the Data Retention Time of this chip is more than 20 years (only ;-).

    You can find lots of interesting application notes on their site. You may be interested in, for example:
    Wear Leveling in NAND Flash Memory
    Application Note for Wear Leveling of Flash Memory
    Program/Erase Cycling Endurance and Data Retention of Macronix SLC NAND Flash Memories
    Program/Erase Cycling Endurance and Data Retention in NOR Flash Memories
    and so on...

    Unfortunately I can't answer your question(s) but I'm very interested in your practical experiences and achievements.
    BTW I'm not as optimistic as you... but it's only a feeling.



  • This raises an important question for those of us developing IoT devices based upon Omega devices. Here in Australia I am required to provide a warranty, but without clarity on the life expectancy of these components it's difficult to come up with a sensible risk assessment of failure rates. With other IoT devices I have developed, the storage was on removable SD cards, so we only factored in the replacement cost of the card.

    If anyone finds a sensible methodology for estimating life expectancy please post it!



  • It very much depends on the P/E pattern (for a database which needs to store critical information). If a specific file or a variable needs to be updated constantly, that sector will wear out fast.
    If that is the issue, now the question is how to deal with it...
    (1) specialized hardware (that has indexing, write cache, TRIM, relocation capabilities) <-- that is what an SSD is designed for
    (2) software based, i.e., do it in RAM (enough RAM space allocated? write-back before power is removed? etc.)
    I.e., there is no free lunch; use the right tool for the task at hand



  • @ccs-hello said in How to estimate actual flash memory wear caused by SQLite3?:

    It very much depends on the P/E pattern (for a database which needs to store critical information). If a specific file or a variable needs to be updated constantly, that sector will wear out fast.

    I don't think that there is a direct file-to-sector mapping in JFFS2, so writing the same file (or part of a file) 100'000 times will not write the same flash sector 100'000 times. It will cause 100'000 write operations of the size of the written data, but spread somehow (ideally: evenly) over all available flash blocks. That's what wear leveling is all about.

    The question is just how near (or far) the wear leveling strategy in JFFS2 is from really even distribution of writes, in actual practice.

    If that is the issue, now the question is how to deal with it...
    (1) specialized hardware (that has indexing, write cache, TRIM, relocation capabilities) <-- that is what an SSD is designed for

    In general, yes, but the scope of my question is limited to the situation as present on an Omega2 (or similar), which is SPI NOR flash driven by JFFS2. The chip + JFFS2 are the "SSD" in an Omega2.

    (2) software based, i.e., do it in RAM (enough RAM space allocated? write-back before power is removed? etc.)

    That's an important point. But AFAIK there is no internal mechanism in an MT7688 for power-fail write-back, so, as nice as that would be, we don't have it (or it would need to be built around the Omega2 with extra hardware).

    So the context for my question really is SQLite3 transactions that actually get written back to flash, and their impact on flash wear.

    Of course, my application already does as much buffering and pooling as possible in its intended use case, to minimize writes (a rough sketch of that idea follows at the end of this post). But it boils down to these few hundred single SQLite table row updates per day with ~50 bytes of net change each.

    I.e., there is no free lunch; use the right tool for the task at hand

    I'm not asking for a free lunch 😉

    I'm just trying to figure out if the current lunch recipe might lead to a worn out flash too quickly or not...
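
    To make the "buffering and pooling" point concrete, here is a minimal Python sketch of the general buffer-in-RAM, flush-in-one-transaction idea (not my actual application code; the database path and the state_log table with its ts/data columns are made up for illustration, and anything still sitting in the RAM buffer is of course lost on a power failure):

    import sqlite3, time

    conn = sqlite3.connect("/overlay/state.db")
    conn.execute("CREATE TABLE IF NOT EXISTS state_log (ts REAL, data BLOB)")

    pending = []  # state changes buffered in RAM

    def record(data):
        # just remember the change; nothing is written to flash yet
        pending.append((time.time(), data))

    def flush():
        # write the whole batch in a single transaction, i.e. one commit
        # (one burst of flash writes) instead of one commit per row
        if pending:
            with conn:
                conn.executemany("INSERT INTO state_log VALUES (?, ?)", pending)
            pending.clear()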



  • @luz One more thing. Omega2's flash is an MTD device.

    Eraseblocks wear-out and become bad and unusable after about 10^3 (for MLC NAND) - 10^5 (NOR, SLC NAND) erase cycles

    MTD FAQ What are the differences between flash devices and block drives?

    Eraseblocks are larger (typically 128 KB - unfortunately I don't know the size on the Omega2) than those 256-byte flash pages or 4 KB flash sectors.



  • @György-Farkas said in How to estimate actual flash memory wear caused by SQLite3?:

    @luz One more thing. Omega2's flash is an MTD device.

    Yes, but "MTD" just means "Memory Technology Device", which is the Linux term for all memory-based storage types, including flash of all types (NAND, NOR etc.).

    Eraseblocks wear-out and become bad and unusable after about 10^3 (for MLC NAND) - 10^5 (NOR, SLC NAND) erase cycles

    The Omega2 SPI Flash seems to be NOR with 10^5 erase cycles.
    Both W25Q128FV and MX25L25635E have this number in the datasheet.

    MTD FAQ What are the differences between flash devices and block drives?

    Eraseblocks are larger (typically 128 KB - unfortunately I don't know the size on the Omega2) than those 256-byte flash pages or 4 KB flash sectors.

    cat /proc/mtd 
    

    reveals it:

    dev:    size   erasesize  name
    mtd0: 00030000 00010000 "u-boot"
    mtd1: 00010000 00010000 "u-boot-env"
    mtd2: 00010000 00010000 "factory"
    mtd3: 00fb0000 00010000 "firmware"
    mtd4: 001668cd 00010000 "kernel"
    mtd5: 00e49733 00010000 "rootfs"
    mtd6: 00980000 00010000 "rootfs_data"
    

    So at least at the mtd layer, it is operating with 64kB (0x10000) erase blocks.



  • @luz 'MTD and eraseblocks'
    I wanted to say: maybe you should calculate the lifetime with eraseblocks (xx KBytes) instead of flash pages (256 bytes), and you should take into account only a part of the chip (a rough re-run of the numbers follows at the end of this post).

    So at least at the mtd layer, it is operating with 64kB (0x10000) erase blocks.

    Thanks! 🙂
    I've forgotten - I had already read this in the OpenWrt documentation. It seems I'm getting older... 😞 Maybe I should write everything down.
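
    For what it's worth, here is the earlier estimate re-run with 64 kB eraseblocks instead of 256-byte pages, restricted to the rootfs_data partition from the /proc/mtd listing above. It still assumes perfect wear levelling and exactly one eraseblock erase per state change, so it is only a rough sketch, not a prediction:

    ERASE_CYCLES = 100_000       # NOR endurance, as discussed above
    ERASEBLOCK_SIZE = 0x10000    # 64 kB "erasesize" from /proc/mtd
    PARTITION_SIZE = 0x980000    # rootfs_data size from /proc/mtd
    WRITES_PER_DAY = 500         # state changes per day, as assumed earlier

    blocks = PARTITION_SIZE // ERASEBLOCK_SIZE            # 152 eraseblocks
    years = blocks * ERASE_CYCLES / WRITES_PER_DAY / 365
    print(f"{blocks} eraseblocks -> roughly {years:,.0f} years")  # ~83 years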


