I do it the other way around: first write (zero wipe), then read (SMART long test). Served me well for many disks. :)
I never had a bad block on a brand-new drive. At least not a reported/detectable one. So if it’s truly a “bad block” (how exactly is it reported in your SMART data?) I would exchange the disks.
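If you want to check yourself, a quick sketch (assuming smartmontools is installed and /dev/diskX stands in for your drive):
smartctl -A /dev/diskX
Look at attributes like Reallocated_Sector_Ct (5) and Current_Pending_Sector (197); non-zero raw values on a brand-new drive would be a reason to send it back.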
The Amazon.de “Black Friday Week” price of the tracked disk is 306.99 EUR. So basically back to the price of 2 months ago, before the extreme price hike. No “deal”, just people who had to buy during the last 2 months got to pay a lot extra on top of it.
Dunno how the “Easystores” match up with the “Elements” or what the exact difference is, but 199 USD at least sounds like a deal. :)
I’m on a Mac. Actually I never had to reinstall my OS (usually I install once when the computer is new). I still keep my actual data separate from the system installation disk/device. It’s just the smart thing to do.
But maybe I’m too old-school… back in the Amiga days we had a write-protected system floppy disk (aka the “Workbench” boot disk) and then we kept our actual data on some other (writable) floppy disks. I carried that mentality over. Now I shuffle hard disk drives instead of floppies, but the concept is basically the same. :)
If out of other options, just do a simple zero format (e.g. diskutil zeroDisk diskX on macOS), and a long SMART test afterwards (e.g. smartctl -t long /dev/diskX). That’s what I do with my new disks and it has served me well so far. For large-capacity disks it’s a heavy 2-day process (1 day formatting, 1 day testing), but it gives me peace of mind afterwards.
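Once the long test finishes you can check the outcome, a minimal sketch (again with /dev/diskX standing in for your drive):
smartctl -l selftest /dev/diskX
The self-test log should read “Completed without error”; anything else and I’d return the disk.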
Extra hint: during any SMART long test make sure to disable disk sleep in your OS for the duration, else the test will abort (e.g. caffeinate -m on macOS). Also avoid crappy external enclosures that put the disks to sleep by themselves (or you may want to run a script that regularly reads a block from the disk to keep it awake).
Here’s my macOS script to handle the job (I needed it recently because of a temporary crappy USB enclosure). It reads a block every 2 minutes via raw I/O, with no caching involved (“/dev/rdisk”):
#!/bin/bash
# $Id: keepdiskawake 64 2023-10-29 01:55:56Z tokai $
#
# Keeps an external disk awake by reading a single raw block every
# 2 minutes, bypassing the filesystem cache via /dev/rdiskX.

if [ "$#" -ne 1 ]; then
    echo "keepdiskawake: exactly one argument required (disk identifier, volume name, or volume path)." 1>&2
    exit 1
fi

# Resolve whatever was passed (identifier, volume name, or path) to a
# plain device identifier like "disk4".
MY_DISKID=$(diskutil info "${1}" | awk '/Device Identifier:/ {print $3}')

if [[ -n "${MY_DISKID}" ]]; then
    printf '\033[35mPoking disk \033[1m"%s"\033[22m with identifier \033[1m"%s"\033[22m…\033[0m\n' "${1}" "${MY_DISKID}"
    MY_RDISKID="/dev/r${MY_DISKID}"
    echo "CTRL-C to quit"
    while true; do
        echo -n .
        # Read one raw block; raw device access avoids the cache, so the
        # disk actually has to wake up to serve the request.
        dd if="${MY_RDISKID}" of="/dev/null" count=1 2>/dev/null
        sleep 120
    done
else
    echo "keepdiskawake: Couldn't determine disk identifier for \"${1}\"." 1>&2
    exit 1
fi
I’m similarly looking for a solution to that. Something where I can simply get an RSS feed for each show I want to track would be perfect (similar to tvkingdom.jp for Japanese TV shows).
I used to have a widget for the old OS X Dashboard once upon a time (the Dashboard, the widget, and the site/service it was using… all R.I.P.) that did the job for me. Yikes, I miss that!
If you improperly eject a disk while the filesystem is in a flux state, it doesn’t matter which disk you use; you’re very likely to encounter that issue again. More so with some filesystems than others. APFS is for some reason worse in this regard, so best stick with the traditional “HFS+ w/ Journaling” on a Mac.
If you transfer large collections of data you could, and probably should, use rsync and not the Finder, preferably in a screen or tmux session. That way a crash of any of the UI components will not mess up the copy process (even if Terminal.app goes down you’ll be able to reconnect to the screen/tmux session with the copy process still doing its thing). Also make sure your external disk has proper power all the time during the process (preferably do not attach another device during that time).
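A minimal sketch of that rsync-in-tmux workflow (the paths are placeholders, adjust to your setup):
tmux new -s copyjob
rsync -avh --progress /path/to/source/ /Volumes/External/destination/
Detach with CTRL-B then D; reattach later with tmux attach -t copyjob to check on the progress.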
This may help to send you down that rabbit hole for further research: https://www.newsgroupreviews.com/par-files.html
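In case it’s useful, a quick sketch with the par2 command-line tool (the file names are hypothetical):
par2 create -r10 archive.par2 archive.rar
par2 verify archive.par2
par2 repair archive.par2
This creates roughly 10% recovery data, then lets you verify and, if needed, repair the protected file later.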
“Thousands of years”. How the *bleep* could they know? You gotta love claims like that. I still remember when they said that CDs would last forever. :)
If it’s on a Mac there should be metadata, check with:
mdls /path/to/your/file
Usually there’s a kMDItemWhereFroms attribute or similar.
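To query just that one attribute directly, something like this should work:
mdls -name kMDItemWhereFroms /path/to/your/file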
I always buy new. But the used prices are typically not cheap enough here to make them worthwhile. If I can only save like ~20 EUR per disk, I’d rather go new than take a used/refurbished one with a higher risk attached. I don’t buy below 18TB anymore though, so the market for tiny drives like 6TB may be a different situation.
My recommendation: check the price per TB, not the price per drive. You probably save a lot when you buy larger-capacity disks.
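A quick sketch of the math in the shell (using the 306.99 EUR price from above and assuming an 18TB drive):
echo "scale=2; 306.99 / 18" | bc
That’s roughly 17.05 EUR/TB; compare that figure across capacities instead of the sticker price.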
The answer to those questions is always: rclone (still works fine with my mega account, no issues).
I don’t see a problem with manually confirming the account in a web browser though. Unless you want to automate mass account creation? Then you’re on your own. :)
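A minimal sketch of that setup (the remote name “mega” and the paths are whatever you pick during configuration):
rclone config
rclone copy mega:some/remote/path ./local/path -P
rclone config walks you through adding the Mega remote interactively; -P shows live transfer progress.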
A little bit of shell magic around a little Python3 helper will do the job quickly:
for i in $(seq 1 10); do grablinks.py --fix-links 'https://forums.aida64.com/topic/667-share-your-sensorpanel/page/'"${i}" --search 'file/attachment.php?' -f 'wget -c -O '\''%text%'\'' '\''%url%'\' | fgrep '.sensorpanel'; done | tee fetchscript.sh
Then verify the generated shell script, and finally: sh fetchscript.sh. Happy waiting! :)
You can grab my grablinks.py Python3 script from here: https://github.com/the-real-tokai/grablinks