> ZFS never really adapted to today's world of widely-available flash storage: Although flash can be used to support the ZIL and L2ARC caches, these are of dubious value in a system with sufficient RAM, and ZFS has no true hybrid storage capability.

> And no one is talking about NVMe even though it's everywhere in performance PC's.

Why should a filesystem care about NVMe? It's a different layer. ZFS generally doesn't care whether it's IDE, SATA, NVMe, or a microSD card.

> can be a pain to use (except in FreeBSD, Solaris, and purpose-built appliances)

I think it's just a package install away on many Linux distros? It's also installable on macOS - I had a ZFS USB disk I shared between a Mac and FreeBSD.

Also, it's interesting that these two sentences appear in the same article:

> best level of data protection in a small office/home office (SOHO) environment.

> It's laughable that the ZFS documentation obsesses over a few GB of SLC flash when multi-TB 3D NAND drives are on the market

Who has enough money to get a multi-TB SSD for SOHO?!
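On the "package install away" point, here is a minimal sketch of what that looks like on a Debian-family system (device names are hypothetical); note that the same `zpool` command accepts SATA and NVMe devices interchangeably, which is the "different layer" argument in practice:

```
# OpenZFS is packaged for many distros; on Debian/Ubuntu:
sudo apt install zfsutils-linux

# ZFS treats every vdev as a plain block device, so mixing transports
# in a mirror is allowed (whether it is wise depends on the devices):
sudo zpool create tank mirror /dev/sda /dev/nvme0n1
sudo zpool status tank
```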
> Who has enough money to get a multi-TB SSD for SOHO?!

I was contemplating a build with two of these in a RAID 1 configuration for my next homelab server. Personally, I run a gaming (Windows) desktop at home and an always-on, UPS-backed homelab server that handles minor ops tasks (mostly backing up side projects and some ETL) and provides dev VMs. My gaming/work desktop is ~4 years old and represents ~$2.5k of my hardware budget; monitors/peripherals/desk/chair generally eat another $2k and are of a similar age. I generally spend ~$2.5k on the dev server. I could easily see someone who purely works from home (rather than 1-2 days a week) operating with a larger budget and genuinely needing a ZFS setup of 2TB SSDs. Realistically, $2-3k/year is 2-5% of the sort of salaries we see on HN; given that we make a living at this sort of thing, it isn't surprising to me that people would spend that kind of money. Keep in mind that "business equipment" certainly qualifies for such a dev server, so you won't be paying taxes on it if you itemize as well.

I'm trying to tune L2ARC for maximum performance, but I'm having trouble understanding how exactly it operates in ZFS. The device that I want to use as an L2ARC cache vdev is an Intel P4608 enterprise SSD, which is an x8 PCIe 3.0 device. It features two separate 3.2TB pools, each with x4 lanes, and I want to stripe both of these together for the combined speed and IOPS (this would use x8 PCIe lanes). I want to stripe this device as a cache vdev, so the total L2ARC would be ~6.4TB. You can view additional stats about this drive here to get an idea of the performance it is capable of. Random reads and smaller reads will surely be much faster than the pool of disks, but I'm confused about sequential reads, and the docs do not explain this properly.
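For what it's worth, the sequential-read behaviour the docs gloss over is largely controlled by one module parameter: by default, OpenZFS does not feed prefetched (i.e. streaming/sequential) buffers into L2ARC at all. A minimal sketch, assuming a pool named `tank` and hypothetical device names for the two halves of the card:

```
# The P4608 presents its two 3.2TB halves as separate NVMe devices.
# Cache vdevs are always striped, so this yields one ~6.4TB L2ARC:
sudo zpool add tank cache /dev/nvme0n1 /dev/nvme1n1

# l2arc_noprefetch defaults to 1, which excludes prefetched (sequential)
# buffers from L2ARC; setting it to 0 lets streaming reads be cached too:
echo 0 | sudo tee /sys/module/zfs/parameters/l2arc_noprefetch

# The default L2ARC fill throttle (l2arc_write_max, ~8 MB/s) is tiny for
# a drive of this class; raising it, e.g. to 256 MB/s, warms it faster:
echo $((256*1024*1024)) | sudo tee /sys/module/zfs/parameters/l2arc_write_max

# Watch ARC/L2ARC hit rates while benchmarking:
arcstat 5
```

Even with prefetch caching enabled, whether sequential reads from L2ARC beat the main pool depends on the pool's aggregate streaming bandwidth; a wide stripe of spinning disks can out-stream a single PCIe 3.0 x8 card, which is presumably why the documentation hedges on this point.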
If you have a mandatory flush to disk any time there's a snapshot, they suddenly become a lot more expensive. They are still atomic, in that the operation either completely happened or did not happen at all if a crash occurs after the command returns; i.e., you don't end up with a partial or corrupt snapshot after you reboot. As for the need for a snapshot to be on disk when sending: what you cite says you get an obscure error ("stale NFS file handle"), not that you get missing files. If you're silently getting missing files on a send/receive, that's a bug and should be reported. I use snapshots quite a bit, both for data, for backup/replication with send/receive, and for the root fs, and haven't had problems with them. The known problem with Btrfs snapshots is that they are deceptively cheap to create but become expensive later to delete, due to back-reference searching, freeing extents (or not, if they're still held by other snapshots), and updating metadata. Dozens to small hundreds aren't normally a problem; in my use case I don't notice performance problems.
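For readers following along, the snapshot and send/receive workflow under discussion looks roughly like this (paths are hypothetical; the incremental send assumes the parent snapshot already exists on the receiving side):

```
# Creating a read-only snapshot is cheap: it is mostly a metadata write.
sudo btrfs subvolume snapshot -r /data /data/.snapshots/day1

# Full replication to another Btrfs filesystem:
sudo btrfs send /data/.snapshots/day1 | sudo btrfs receive /backup

# Incremental replication against a parent present on both sides:
sudo btrfs send -p /data/.snapshots/day1 /data/.snapshots/day2 \
    | sudo btrfs receive /backup

# Deletion is where the deferred cost lands: extents are freed only once
# no other snapshot references them, by a background cleaner.
sudo btrfs subvolume delete /data/.snapshots/day1
```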