I have a day off, national holiday. What happened so far:

- Internet outage since early in the morning. Still going on.
- Unable to reach a human being at my ISP, so I hope they mean it when the computer voice says "we know it, we're on it". 🤣
- systemd (PID 1) crashed. Might be partially my fault, but meh.

I take this as a sign to not do any computer stuff today. 🤣

(#ixpmzia) Ran a few tests. Copying data from the NAS’s encrypted ZFS pool to the USB disk’s encrypted btrfs runs at ~20 MByte/s. That is for a single 1 GB file of random data. Cold caches, `sync` included. That same USB disk with the same btrfs can sustain ~75 MByte/s when I use it on my workstation (i7-3770). And indeed, the `aes` flag does not show up in the output of `lscpu` on the NAS. I’ll try to tweak some things about this, but it might be time for an upgrade … 🫤 (Or I’ll have to re-think the entire thing somehow.)
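The numbers above line up with a CPU that lacks hardware AES: ~20 MByte/s through two encryption layers on the NAS versus ~75 MByte/s on an AES-capable i7. A minimal sketch of that sanity check — read the CPU flags the way `lscpu` does (from `/proc/cpuinfo`) and estimate the copy time for the test file; the 1 GB file size and throughput figures come from the post, everything else is illustrative:

```python
# Sketch only: check for the "aes" CPU flag and estimate copy time
# for the 1 GB random-data test file at the two measured rates.

def has_aes_flag(cpuinfo_text: str) -> bool:
    """True if any 'flags' line in /proc/cpuinfo lists 'aes' (AES-NI)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags") and "aes" in line.split(":", 1)[-1].split():
            return True
    return False

def copy_seconds(size_mbyte: float, rate_mbyte_per_s: float) -> float:
    """Wall-clock seconds to copy size_mbyte at rate_mbyte_per_s."""
    return size_mbyte / rate_mbyte_per_s

# 1 GB (1024 MByte) at the two measured rates:
print(f"NAS:         {copy_seconds(1024, 20):.0f} s")   # ~51 s
print(f"workstation: {copy_seconds(1024, 75):.0f} s")   # ~14 s
```

On a real system you would feed `open("/proc/cpuinfo").read()` into `has_aes_flag()`; without the flag, both dm-crypt/btrfs and ZFS native encryption fall back to software AES and become CPU-bound.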

(#ixpmzia) The “annoying” thing about hardware these days is that it basically keeps working “forever”. At least much, much longer than you’d expect. Now that I think about it … I only remember *one* PC of mine actually dying because of a hardware failure – and that was probably because I did too much overclocking. 😂 If it wasn’t for changes in *software*, I could probably still use them all. I mean, why not, my Pentium 133 still works and I use it for gaming regularly. So … my little NAS probably won’t die any time soon. Hmmm.

(#ixpmzia) @mckinley Not really sure, to be honest. _Probably_ a couple hundred GB … ? 🤔 With the *changed* data, it might be half a TB to transfer? I’m just guessing. Let’s see how it goes next time. I don’t expect to add much data any time soon. (On the other hand, I’ll swap the USB disks for the next run, so it’ll take the same ~9 hours, again. Meh.) I think the solution is to have less data. 😈
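A quick back-of-the-envelope check on those guesses, using the ~20 MByte/s measured on the NAS earlier (the data-volume figures are the post’s own rough estimates, not measurements):

```python
# Estimate transfer time for the guessed data volumes at the
# ~20 MByte/s rate measured on the NAS.

def transfer_hours(gigabytes: float, mbyte_per_s: float) -> float:
    """Hours to move `gigabytes` (GB = 1024 MByte here) at `mbyte_per_s`."""
    return gigabytes * 1024 / mbyte_per_s / 3600

print(f"a couple hundred GB: {transfer_hours(200, 20):.1f} h")  # ~2.8 h
print(f"half a TB:           {transfer_hours(500, 20):.1f} h")  # ~7.1 h
```

~0.5 TB at 20 MByte/s works out to roughly 7 hours, so the quoted ~9-hour run is in the right ballpark once per-file overhead is added.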

(#bghmkra) @prologic It always fetches the canonical feed URL and, when it can’t find the latest twt hash (that it saw in the previous run), it traverses the archived feeds until it does find it. Something along those lines. I just got one such notification:

Date: Tue, 07 May 2024 15:56:01 +0200
From: me@pinguin
To: me@pinguin
Subject: [regularly][regularly=] jenny

Fetching archived feed (configured as prologic, )
Fetching archived feed (configured as prologic, )
Fetching archived feed (configured as prologic, )
Fetching archived feed (configured as prologic, )
Fetching archived feed (configured as prologic, )

Now, your feed did *not* get archived, as far as I can tell. So why am I getting this then? Have you edited a twt just now? That would explain it. 😅
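The traversal described above can be sketched like this — not jenny’s actual code, just the logic as stated: fetch the canonical feed, and if the last-seen twt hash isn’t there, keep following each archive’s “prev” link (the twtxt archiving convention) until the hash turns up or the archives run out. `fetch` is a hypothetical callable standing in for the HTTP fetch plus feed parsing:

```python
# Sketch of the archive traversal: fetch(url) is assumed to return
# (set_of_twt_hashes, prev_archive_url_or_None) for a feed at url.

def find_last_seen(fetch, canonical_url, last_hash):
    """Walk from the canonical feed through archived feeds until
    last_hash is found. Returns the list of URLs fetched — each
    archive visit corresponds to one 'Fetching archived feed' line
    in the notification above."""
    visited = []
    url = canonical_url
    while url is not None:
        visited.append(url)
        hashes, prev_url = fetch(url)
        if last_hash in hashes:
            break  # found the twt we saw last run; stop traversing
        url = prev_url  # follow the "prev" archive link
    return visited
```

With an edited twt, the old hash no longer exists anywhere, so the loop walks every archive before giving up — which would explain a burst of “Fetching archived feed” lines even when the feed was never archived since the last run.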
