SSD speeds are nothing short of miraculous in my mind. I come from the old days of striping 16 HDDs together (at minimum) to get 1GB/s throughput. Depending on the chassis, that was two 8-drive enclosures in the "desktop" version, or the large 4RU enclosures with redundant PSUs and fans loud enough to overpower arena rock concerts. Now we can get 5+GB/s throughput from a tiny stick that is absolutely silent and can be used externally via a single cable for data and power. I edit 4K+ video as well, and can now edit directly from the same device the camera recorded to during production. I'm skipping over the parts about still making backups, but there's no more multi-hour copy from source media to edit media during a DIT step. I've spent many a shoot as a DIT wishing the 1s and 0s would travel across devices much faster while everyone else on the production had already left, so this is much appreciated by me. Oh, and those 16-device units only came close to 4TB around the time I finally dropped spinning rust.
The first enclosure I ever dealt with was a 7-bay RAID-0 that could just barely handle AVR75 encoding from Avid. Just barely to the point that only video was saved to the array. The audio throughput would put it over the top, so audio was saved to a separate external drive.
Using SSD feels like a well deserved power up from those days.
The latency of modern NVMe is what really blows my mind (as low as 20-30 µs). NVMe is about an order of magnitude quicker than SAS and SATA.
This is why I always recommend developers try SQLite on top of NVMe storage. The performance is incredible. I don't think you would see query times anywhere near 20µs with a hosted SQL database, even if it's on the same machine using named pipes or another IPC mechanism.
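A minimal sketch of what that looks like in practice (the file and table names are made up):

```python
import os, sqlite3, tempfile, time

# Hypothetical micro-benchmark: time point lookups against a local SQLite
# file sitting on fast storage.
path = os.path.join(tempfile.mkdtemp(), "bench.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO kv VALUES (?, ?)",
                ((i, f"val{i}") for i in range(100_000)))
con.commit()

n = 10_000
start = time.perf_counter_ns()
for i in range(n):
    con.execute("SELECT v FROM kv WHERE k = ?", (i,)).fetchone()
elapsed_us = (time.perf_counter_ns() - start) / n / 1000
print(f"avg point lookup: {elapsed_us:.1f} µs")
```

To be fair, most of these lookups are served from SQLite's own page cache; the NVMe latency mainly shows up on cold reads of a database much larger than RAM.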
Meanwhile a job recently told me they are on IBM AS400 boxes “because Postgres and other SQL databases can’t keep up with the number of transactions we have”… for a company that has a few thousand inserts per day…
Obviously not true that they’d overwhelm modern databases but feels like that place has had the same opinions since the 1960s.
Then there's Optane, which got down to ~10µs. The newest controllers and NAND are inching closer on random I/O, but Optane is still the most miraculous SSD tech that's normally obtainable.
Eventually we'll have machines with unified memory+storage. You'll certainly have to take a bit of a performance hit in certain scenarios but also think about the load time improvements. If you store video game files in the same format they'd be needed at runtime you could be at the main menu in under a second.
That would be possible even on spinning harddrives as long as they're already spinning.
The fastest memory can't prevent the real reason games take so long to the menu: company logos. The Xbox can already resume a closed game within a few seconds, loading a simple main menu is trivial in comparison.
At a minimum, we should be able to get everything to DRAM speeds. Beyond that you start to run into certain limitations. Achieving L1 latency is physically impossible if the storage element is more than a few inches away from the CPU.
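That "few inches" limit falls straight out of the speed of light; a quick back-of-the-envelope check (the 1 ns L1 latency budget is an illustrative figure, roughly 4-5 cycles at 4-5 GHz):

```python
# Round-trip distance limit implied by the speed of light.
c_cm_per_ns = 29.98      # light speed in vacuum, cm per nanosecond
l1_latency_ns = 1.0      # illustrative L1 hit latency budget

# A signal must travel out and back within the budget, and signals in
# copper/silicon propagate at roughly half of c.
max_one_way_cm = c_cm_per_ns * 0.5 * l1_latency_ns / 2
print(f"max distance at L1 latency: ~{max_one_way_cm:.1f} cm")  # ~7.5 cm, i.e. a few inches
```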
The separation into RAM and external storage (floppy disks, magnetic tapes, hard drives and later SSD etc) is the sole consequence of technology not being advanced enough at the time to store all of the data in memory.
Virtual memory subsystems in operating systems of the last 40+ years pretty much do exactly that – they essentially emulate infinite RAM that spills over onto the external storage that backs it up.
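A file-backed mmap is the easiest way to see that machinery from user space; a minimal sketch, assuming a scratch directory:

```python
import mmap, os, tempfile

# File-backed memory mapping: the kernel pages data between RAM and disk on
# demand, the same machinery that lets virtual memory spill over to storage.
path = os.path.join(tempfile.mkdtemp(), "backing.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # one page of backing store

with open(path, "r+b") as f, mmap.mmap(f.fileno(), 4096) as mem:
    mem[0:5] = b"hello"              # an ordinary memory write...
    mem.flush()                      # ...that the VM subsystem persists to the file

with open(path, "rb") as f:
    print(f.read(5))                 # b'hello'
```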
Prosumer-grade laptops with large amounts of RAM are already easily available, and in 2-3 years there will be ones with 256-512 GB as well, so… it is not entirely inconceivable that in 10-20 years (maybe more, maybe less) Optane-style memory is going to make a comeback, laptops/desktops will come with just memory, and the separation into RAM and external storage will finally cease to exist.
P.S. RAM has become so cheap and has reached such large capacity that the current generation of young engineers don't even know what swap is, or why they might want to configure it.
I am also of the opinion that we are heading towards convergence, although it is not yet clear what the designs are going to converge on.
Pretty much every modern CPU is a hybrid design (either modified Harvard or von Neumann), and then there are SoCs, as you have rightfully pointed out, which are usually modified Harvard, with heterogeneous computing, integrated SIMT (GPU), DSPs and various accelerators (e.g. NPU) all connected via high-speed interconnects. Apple has added unified memory, and there have been rumours that with the advent of the M5 they are going to change how the memory chips are packaged (added to the SoC), which might (or might not) lay a path for the unification of RAM and storage in the future. It is going to be an interesting time.
It's not really the SSDs themselves that are incredibly fast (they still are, somewhat); it's mostly the RAM cache and clever tricks to make TLC feel like SLC.
With most (cheap) SSDs, performance goes off a cliff once you hit the boundary of these tricks.
We're talking about devices capable of >2GB/s throughput while acquiring footage at <0.5GB/s. No caching issues, and I'm not buying el cheapo SSDs either; these are all rated and approved by camera makers. It wasn't brought up because it's not an issue. For people who are wondering why camera makers approve particular recording media or not, this is part of the answer. But nobody was asking that particular question, and instead the reply tried to rain on my parade.
Not sure what warranted such an aggressive response.
I grew up in the 90s, on 56kb modems and PCs that rumbled and whined when you booted them up. I was at the tail end of using floppies.
I never said I didn't love the speed of SSDs, and when they first started to become mainstream it was an upgrade I did for everyone around me. I made my comment in part because you mentioned dumping 4K onto the SSD and/or editing from it. It can be a nasty surprise if you're doing something live and suddenly your throughput plummets, everything starts to stutter, and you have no idea why.
Your tone is quite odd here. I'm having difficulty parsing your intention, but I'm going to assume you're being genuine because why not.
For the RAM cache, you hit the boundaries when you exhaust it. The cache is faster but smaller, and once it's full, data has to move at the rate of the slower backing NAND. It might not even be RAM; sometimes faster SLC NAND is used for the cache.
It's not really possible to describe it much more concretely than that beyond what you've already been told, performance falls off a cliff when that happens. How long "it" takes, what the level of performance is before and after, it all depends on the device.
There are many more tricks that SSD manufacturers use, but caching is the only one I know of related to speed so I'll leave the rest in the capable hands of Google.
Two of the main ones actually aren't really DRAM related but how full the drive is.
Most (all?) SSDs need a good 20% of the drive free for garbage collection and wear levelling. Going over this means the drive can't do this "asynchronously" and instead has to do it as things are written, which really impacts speed.
Then on top of that, on cheaper flash like TLC and QLC, the drive can go much faster by having free space "pretend" to be SLC, writing in a way that is very "inefficient" size-wise but fast (think a bit like striped RAID-0, except instead of the data reliability issues you get with that, it only works while you have extra space available). Once it hits a certain threshold it can't pretend anymore, as that uses too much space, and it has to write in the proper format.
These things are additive too, so on cheaper flash things get very, very slow. Learnt this the hard way some years ago when a drive would barely write out at 50% of HDD speeds.
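A back-of-the-envelope model of the two effects described above; every number here is made up for illustration, not a spec of any real drive:

```python
# Hypothetical 2 TB TLC drive, 40% free, with a dynamic pSLC write cache.
drive_tb = 2.0
free_fraction = 0.40
tlc_bits_per_cell = 3

# Free TLC space borrowed as pSLC stores 1 bit/cell instead of 3,
# so the usable cache is roughly free_space / 3.
free_gb = drive_tb * 1000 * free_fraction
pslc_cache_gb = free_gb / tlc_bits_per_cell

burst_gbps, sustained_gbps = 5.0, 0.5   # made-up pSLC vs native TLC write speeds

def write_time_s(total_gb):
    """Seconds to write total_gb: fast until the cache fills, slow after."""
    cached = min(total_gb, pslc_cache_gb)
    return cached / burst_gbps + max(0.0, total_gb - cached) / sustained_gbps

print(f"pSLC cache ~= {pslc_cache_gb:.0f} GB")
print(f"100 GB write: {write_time_s(100):.0f} s")   # fits in cache: fast
print(f"500 GB write: {write_time_s(500):.0f} s")   # blows past it: the cliff
```

With these toy numbers, a 100 GB transfer takes 20 s but a 500 GB transfer takes over 8 minutes, which is the cliff people describe.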
In theory it should be more efficient, but in reality it's not. Any gains from 'modulating' efficiency are eaten up by very aggressive error correction and by having to write/read things multiple times (because the error rate is so high). I think QLC usually needs somewhere on the order of 8 "write/read" cycles, for example, to verify the data is written correctly.
From a lawyer, 'tell me something actionable' would mean 'cross the line and say something upon which I can sue you', so I sympathise with anyone finding it unsettling.
In context it's more likely a translation issue, perhaps from modern translation picking up a challenging tone and inadvertently turning it to 'challenged by a lawyer or cop' mode to maintain the tone.
My tone is a combination of genuine curiosity and moderate annoyance at a dismissive but unhelpful comment.
RootsComment: SSD speed is miraculous!
Jorvis: well ackshually is just RAM and tricks that run out
Me: your comment provides zero value
I am annoyed by well ackshually comments. I’d love to learn more about SSD performance. How is the RAM filled? How bad is perf when you cache miss? What’s worst-case perf? What usage patterns are good or bad? So many interesting questions.
Look at this Kioxia Exceria drive[0]. It plummets from 6800Mb/s (850MB/s) all the way to 1600Mb/s (200MB/s).
It's not really a well ackshually comment; there are real pitfalls, especially when doing 4K. RAW 4K is 12Gb/s and would fill 450GB within 5 minutes, ProRes 4444 XQ within 10 minutes, and ProRes 4444 in 40 minutes.
Martinald's comment is right too. By being very inefficient and treating TLC (or even QLC) as single-level, writing only one bit to a cell, much higher performance can be extracted. But once you hit the 80% full threshold, the drive starts to repack the last/least-used data into multiple bits per cell.
A RAM cache and SLC cache can both speed access times up, act as a write buffer and mask the shuffling of bits, but there is a limit.
Lastly, it's kind of ironic to see me portrayed as jaded when someone else is the one pouring vitriol all over the thread. Ah well.
Right? I’m comparing my direct experience of enduring the pain of slower than Christmas HDDs to the incredible speeds of SSDs, and get a well actually it’s not SSDs that are fast blah blah. Look dude, I don’t care about your magic smoke that you’re so smart you know how the smoke is made. I just care that I can transfer data at blisteringly fast speeds. I couldn’t care less about QLC, SLC, or TLC because reading/writing at >2GB/s is all the tender loving care I need. Don’t rain on my parade because you’re jaded.
I haven’t had a spinning platter in my dev machine since, I think, 2008 or 2009. Even back then an SSD was the single biggest upgrade I’d seen since the first 3D accelerator cards in the late 90s. (Oh god, I’m old.)
More recently we saw SSDs get added to video game consoles, and load times are about 4x faster. And that’s with code/data optimized for a spinning platter, not an SSD.
I know they aren’t actually magic. But they might as well be! I’d love to hear details on what weird conditions reduce their performance by 10x. That’d be cool and fun to know. Alas.
Tom's Hardware usually includes a "Sustained Write Performance and Cache Recovery" test.
The test measures the write-cache speed and the time until the write speed falls to the native NAND rate. There are usually irregularities in the sustained write speeds as well.
The other test I've seen is based on writing into and using up free space; SSD performance can drop off as free space fills up and garbage collection efficiency goes down. I think this particularly impacts random writes.
In the enterprise space, drives tend to keep more over-provisioned NAND free to maintain more consistent performance. Very early in the SSD timeline, it was advisable to allocate only 80% of consumer drives if you were using them outside of desktops and expected the workload to fill them.
This hits home even more since I started restoring some vintage Macs.
For the ones new enough to get an SSD upgrade, it's night and day the difference (even a Power Mac G4 can feel fresh and fast just swapping out the drive). For older Macs like PowerBooks and classic Macs, there are so many SD/CF card to IDE/SCSI/etc. adapters now, they also get a significant boost.
But part of the nostalgia of sitting there listening to the rumble of the little hard drive is gone.
> But part of the nostalgia of sitting there listening to the rumble of the little hard drive is gone.
I remember this being a key troubleshooting step. Listen/feel for the hum of the hard drive OR the telltale click clack, grinding, etc that foretold doom.
I've just finished CF swapping a PowerBook 1400cs/117. It's a base model with 12MB RAM, so there are other bottlenecks, but OS 8.1 takes about 90 seconds from power to desktop and that's pretty good for a low-end machine with a fairly heavy OS.
Somehow the 750MB HDD from 1996 is still working, but I admit that the crunch and rumble of HDDs is a nostalgia I'm happy to leave in the past.
My 1.67 PowerBook G4 screams with a 256GB mSATA SSD-IDE adapter. Until you start compiling code or web surfing, it still feels like a pretty modern machine. I kind of wish I didn't try the same upgrade on a iBook G3, though...
>I kind of wish I didn't try the same upgrade on a iBook G3, though...
Oh god. Those were the worst things ever to upgrade the hard drive. Just reading this gave me a nightmare flashback to having to keep track of all the different screws. This is why my vintage G3 machine is a Pismo instead of an iBook.
Yeah this machine will probably never be the same. It does have an SSD now! But also a CD drive that isn't latching properly and the entire palmrest clicks the mouse button.
It doesn't help that I'm not a great laptop repair tech as is, but wow are those iBooks terrible. The AlBook was fine, and the Unibody MacBooks just a few years later had the HDD next to the battery under a tool-less latch.
I just picked up a 1.5GHz PowerBook G4 12-inch in mint condition. RAM is maxed out, but I've been putting off the SSD-IDE upgrade because of how intrusive it is and how many screws are involved.
I had a 2011 MBP that I kept running by replacing the HDD with an SSD, and then replacing the DVD-ROM drive with a second SSD. The second SSD had throughput limits because that bay was designed for a shiny round disc, so it got a lower-capability interface. I had that machine until the 3rd GPU replacement died, and eventually switched to a second-gen butterfly keyboard model. The only reason it was tolerable was the SSDs (oh, and the RAM upgrades).
Did you ever have the GPU issue? My sister had a 2011, I had to desolder a resistor (or maybe two?) on it to bypass the dGPU since it was causing it to boot loop. But now it's still running and pretty happily for some basic needs!
Yes, that's why it was on the 3rd repair. Apple knew they had issues and replaced it for me before by replacing the entire main board. Twice. The last time I took it in, they would no longer replace for free and wanted $800 for the repair. That was half the cost of modern laptop, so I chose no. I was unaware of being able to disable the GPU like that. I still have it on a shelf, but honestly, I don't see trying to do the hack now but might have considered back then.
While these are geared toward retrocomputing, there are things that attempt to simulate the sound based on activity LEDs: https://www.serdashop.com/HDDClicker
I'm running 12 of them for ZFS cache/log/special, and they are fast/tough enough to make a large array on a slow link feel fast. I shake my fist at Intel and Micron for taking away one of the best memory technologies to ever exist.
I have (stupidly) used a too small Samsung EVO drive as a caching drive, and that is probably the first computer part that I've worn out (bar a mouse & keyboard).
Totally. I spent a lot of time 15-20 years ago building out large email systems.
I recently bought a $17 SSD for my son’s middle school project that was specced to deliver like 3x what I needed in those days. From a storage perspective, I was probably spending $50/GB/mo all-in to deploy a multi-million-dollar storage solution. TBH… you’d probably smoke that system with used laptops today.
If you're interested in some hard data, Backblaze publishes their HD failure numbers[1]. These disks are storage optimized, not performance optimized like the parent comment, but they have a pretty large collection of various hard drives, and it's pretty interesting to see how reliability can vary dramatically across brand and model.
The Backblaze reports are impressive. It would have been very handy back then to know which models to buy. They break it down by capacity within the same family of drives, so a 2TB might be sound while the 4TB might be flakier. That information is very useful when it comes time to think about upgrading capacity in the arrays. When someone has gone through these battles and then gives away the data they learned, it would just be dumb not to take advantage of their generosity.
Can confirm. My 3TB Seagate was the only disk so far (knocking on wood) that died in a way that lost some data. Still managed to make a copy with dd_rescue, but there was a big region that just returned read errors and I ended up with a bunch of corrupt files. Thankfully nothing too important...
Depending on the HDD vendor/model. We had hot spares and cold spares. On one build, we had a bad batch of drives. We built the array on a Friday, and left it for burn-in running over the weekend. On Monday, we came in to a bunch of alarms and >50% failure rate. At least they died during the burn-in so no data loss, but it was an extreme example. That was across multiple 16-bay rack mount chassis. It was an infamous case though, we were not alone.
More typically, you'd have a drive die much less frequently, but it was something you absolutely had to be prepared for. With RAID-6 and a hot spare, you could be okay with a single drive failure. Theoretically, you could lose two, but it would be a very nervy day getting the array to rebuild without issue.
I asked because I built a makeshift NAS for myself with three 4TB IronWolfs, but they died before the third year. I didn't investigate much, but it was most likely because of power outages and the lack of a UPS at the time. It's still quite a bit of work to maintain physical hard drives, and the probability of failure increases with the number of units in the array: what matters isn't the likelihood of any one drive failing, but the likelihood that none of them fail over a period of time, which shrinks as you add drives.
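That compounding effect can be made concrete; a tiny sketch, where the 3% annual failure rate is a made-up illustrative figure (real AFRs vary by model, per the Backblaze data mentioned elsewhere in the thread):

```python
# Probability that at least one drive in an array fails within a year,
# assuming independent failures at a hypothetical 3% annual failure rate.
afr = 0.03

def p_any_failure(n_drives, afr=afr):
    # 1 minus the probability that every drive survives the year
    return 1 - (1 - afr) ** n_drives

for n in (1, 3, 8, 16):
    print(f"{n:2d} drives: {p_any_failure(n):.1%}")
```

With these numbers, a single drive fails 3% of the time, but a 16-drive array sees at least one failure in a given year almost 40% of the time.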
Any electronic gear that you care about must be connected to a UPS. HDDs are very susceptible to power issues. Good UPSes are also line conditioners, so you get a clean sine wave rather than whatever comes straight from the mains. If you've never seen it, connect a meter to an outlet in your home and watch how much fluctuation you get throughout the day. Most people think about spikes/surges while forgetting that dips and under-volting are damaging as well. Most equipment has a range of acceptable voltage, but you'd be amazed at the number of times mains will dip below that range. Obviously location will have an effect on quality of service, but I hear my UPSes kick in multiple times a week to cover a dip, if only for a couple of seconds.
The fun thing about storage pools is that they can lull you into thinking they're set-it-and-forget-it. You have to monitor SMART messages. Most drives will give you a heads up if you know where to look. Having the fortitude to keep a hot spare instead of just adding it to the storage pool goes a long way toward avoiding data loss.
Are your drives under heavy load or primarily just spinning, waiting for use? Are they dying unexpectedly, or are you watching the SMART messages and prepared when it happens?
They’re idle most of the time. Powered on 24/7 though, with maybe a few hundred megabytes written every day, plus a few dozen gigabytes now and then. Mostly long-term storage. SMART has too much noise; I wait for ZFS to kick a drive out of the pool before replacing it. With triple redundancy, I never got close to data loss.
To be clear, I should have said replacing 2-3 disks per year.
That seems awfully high no? I've been running a 5 disk raidz2 pool (3TB disks) and haven't replaced a single drive in the last 6ish years. It's composed of only used/decommissioned drives from ebay. The manufactured date stamp on most of them says 2014.
I did have a period where I thought drives were failing but further investigation revealed that ZFS just didn't like the drives spinning down for power save and would mark them as failed. I don't remember the parameter but essentially just forced the drives to spin 24/7 instead of spinning down when idle and it's been fine ever since. My health monitoring script scrubs the array weekly.
The article speculates on why Apple integrates the SSD controller onto the SOC for their A and M series chips, but misses one big reason, data integrity.
About a decade and a half ago, Apple paid half a billion dollars to acquire the patents of a company making enterprise SSD controllers.
> Anobit appears to be applying a lot of signal processing techniques in addition to ECC to address the issue of NAND reliability and data retention. In its patents there are mentions of periodically refreshing cells whose voltages may have drifted, exploiting some of the behaviors of adjacent cells and generally trying to deal with the things that happen to NAND once it's been worn considerably.
Through all of these efforts, Anobit is promising significant improvements in NAND longevity and reliability.
> The article speculates on why Apple integrates the SSD controller onto the SOC for their A and M series chips, but misses one big reason, data integrity.
If they're really interested in data integrity, they should add checksums to APFS.
If you don't have RAID you can't rebuild corrupted data, but at least you know there's a problem and perhaps restore from Time Machine.
For metadata, you may have multiple copies, so can use a known-good one (this is how ZFS works: some things have multiple copies 'inherently' because they're so important).
Edit:
> Apple File System uses checksums to ensure data integrity for metadata but not for the actual user data, relying instead on error-correcting code (ECC) mechanisms in the storage hardware.[18]
> If they're really interested with data integrity they should add checksums to APFS.
Or you can spend half a billion dollars to solve the issue in hardware.
As one of the creators of ZFS wrote when APFS was announced:
> Explicitly not checksumming user data is a little more interesting. The APFS engineers I talked to cited strong ECC protection within Apple storage devices. Both NAND flash SSDs and magnetic media HDDs use redundant data to detect and correct errors. The Apple engineers contend that Apple devices basically don't return bogus data.
> Or you can spend half a billion dollars to solve the issue in hardware.
And hope that your hardware/firmware doesn't ever get bugs.
Or you can do checksumming at the hardware layer and checksumming at the software/FS layer. Protection in depth.
ZFS has caught issues from hardware, like when LBA 123 is requested but LBA 456 is delivered: the hardware-level checksum for LBA 456 was fine, and so it was passed up the stack, but it wasn't actually the data that was asked for. See Bryan Cantrill's talk "Zebras All the way Down":
And if checksums are not needed for a particular use case, make them toggleable: even ZFS has a checksum=off setting. My problem is not having the option at all.
When the vast majority of the devices you sell run on battery power, it makes far more sense from a battery life perspective to handle issues in hardware as much as possible.
For instance, try to find a processor aimed at mobile devices that doesn't handle video decoding in dedicated hardware instead of running it on a CPU core.
> given Apple designs their own CPUs they could add extensions for anything they need.
Indeed. They added an entire enterprise grade SSD controller.
> In its patents there are mentions of periodically refreshing cells whose voltages may have drifted, exploiting some of the behaviors of adjacent cells and generally trying to deal with the things that happen to NAND once it's been worn considerably.
Apple does not care about external storage at all, as in external disks. They offer iCloud for external storage. They don't sell external disks. They don't like cables. They make lots of money selling you a bigger internal disk.
That's my point, though: it seems weird to spend half a billion dollars on the problem and then, for an extremely common use case, solve it by saying "use OpenZFS".
Why not come up with a solution that covers external storage too, instead of spending all that money and relying on external solutions? I just don't understand why they couldn't have optional checksums in APFS.
To be fair, though, NTFS predates APFS by over 20 years.
Don't get me wrong, there's no reason Microsoft can't transition to another filesystem (like offering ReFS outside of Server or whatever Windows variants support it currently), but I don't understand why a company would transition to a new filesystem in 2016 and not include a data checksums option. Hell, ReFS predates APFS, and I think it even has optional data checksums.
To be fair, NTFS is still the default Windows 11 filesystem in 2025, and Microsoft still makes zero effort to ensure file integrity when you use that default Windows filesystem.
Handling file integrity in hardware is a big step up.
> Handling file integrity in hardware is a big step up.
Is there any evidence that Apple actually has better hardware data integrity than anyone else, though? They make claims in the article linked a few posts back, but AFAIK SSDs in general make use of error correcting codes, not just Apple's SSDs.
That article also points out how even multi-million dollar arrays are known to return bad data, and previous Apple SSD devices have been known to do the same.
I agree that the state of default filesystems is bad, but I'm not convinced that Apple's hardware solution is anything more than them saying, "Trust me, bro."
ReFS is available on Windows 10/11 client; it's what the DevDrive feature uses. And the current Insider allows installation of the OS/boot volume on ReFS.
Default or not, are there sensible alternatives on a Mac? I'm not sure if I'd consider OpenZFS on Mac "sensible" - but I haven't owned a Mac in decades, so... what are the alternatives to APFS?
maybe apple doesn't want you to use external storage, because storage size is how apple upsells devices and grabs larger premium.
By using external storage, instead of paying $10k more for more storage, you are directly harming Apple’s margins and the CEO’s bonus which is not ok /s
That is a weak excuse for relying on the hardware for data integrity. They most likely had that feature and removed it so they wouldn't be liable for a class-action lawsuit when it turns out the NAND ages out due to a bug in the retention algorithm. And NTFS is what, 35 years old at this point? Odd comparison.
I use ZFS where I can (it has content checksums) but it sucks bad on macOS, so I wrote attrsum. It keeps the file content checksum in an xattr (which APFS (and ext3/4) supports).
I use it to protect my photo library on a huge external SSD formatted with APFS (encrypted, natch) because I need to mount it on a mac laptop for Lightroom.
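The basic shape of that checksum-in-xattr approach can be sketched like this (not attrsum's actual code; the attribute name is made up, and os.setxattr/os.getxattr are the Linux calls, while macOS would need the third-party xattr package):

```python
import hashlib, os

ATTR = "user.sha256"  # hypothetical xattr name, not attrsum's actual key

def file_sha256(path, bufsize=1 << 20):
    """Stream the file contents through SHA-256."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def seal(path):
    """Record the content checksum in an extended attribute (Linux-only call)."""
    os.setxattr(path, ATTR, file_sha256(path).encode())

def verify(path):
    """True if the file still matches its recorded checksum."""
    return os.getxattr(path, ATTR).decode() == file_sha256(path)
```

A periodic scrub would then walk the tree calling verify() and flag any file whose content no longer matches its stored checksum.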
Note that this isn't too long after Apple abandoned efforts to bring ZFS into Mac OS X as a potential default filesystem. Patents were probably a good reason, given the Oracle buyout of Sun, but also a bit of "skating to where the puck will be" and realizing that the spinning rust ZFS was built for probably wasn't going to be in their computers for much longer.
> Patents were probably a good reason, given the Oracle buyout of Sun
There is no reason to speculate, as the reason is known (as stated by Jeff Bonwick, one of the co-inventors of ZFS):
>> Apple can currently just take the ZFS CDDL code and incorporate it (like they did with DTrace), but it may be that they wanted a "private license" from Sun (with appropriate technical support and indemnification), and the two entities couldn't come to mutually agreeable terms.
> I cannot disclose details, but that is the essence of it.
> Apple File System uses checksums to ensure data integrity for metadata but not for the actual user data, relying instead on error-correcting code (ECC) mechanisms in the storage hardware.[18]
More evidence that they thought HDDs were on their way out was the unibody MacBook keynote. They made a big deal about how the user could access the HDD from the latch on the bottom without any tools, even as they said SSDs were on the horizon.
Do Apple SSDs have much longer longevity and reliability? I've not looked at the specific patents, nor am I an expert on signal processing, but I've worked with SSD controllers and NAND manufacturers in the past, and they had their own similar ideas.
From my experience working on Mac laptops, yeah. SSD failures are incredibly rare but on the flip side when they do go out repairs are very costly.
I know at my previous job at a large hard drive manufacturer we had special Apple drives that ran different parts and firmware than the regular PC drives. Their specs and tolerances were much different from the PC market as a whole.
Contrary to popular belief, you can run many different off-the-shelf brand NVMe drives on all of the NVMe-fitted Intel Macs. All you need is a passive adapter. My 2017 MacBook Air has a 250GB WD Blue SN570 in it.
And all of the Apple Silicon Macs have them soldered on or use a proprietary module. They make way too much from storage upgrades to include an M.2 slot.
The on-board storage of their MacBooks is as much, if not more, driven by their design vision. They didn't make these things record thin to get an excuse to use surface-mounted RAM and storage. I don't agree with it, but I understand it.
Sure, and I agree with that goal. In fact I would like NVMe controllers to simply not exist. The operating system should manage raw flash, using host algorithms that I can study in the source code.
> How do you think it would be electrically connected to the CPU?
On the CPU's PCIe bus. NVMe drives are PCIe devices, designed specifically to facilitate such interfacing.
Edit: Pardon, misread the actual statement you responded to. Of course one shouldn't hook NAND directly to the CPU. I'll leave my response for whatever value the info has.
I'm with you, but... no. At the level where the controller is operating, things are no longer digital. Capacitance (as in farads, not bytes), voltage, crosstalk, debouncing, traces behaving like antennas, terminations, what have you. Analog values, temperature dependencies, RF interference. Stuff best dealt with by custom logic placed as close to it as possible.
The physical interface controller can exist to that extent, of course. But I think the command interface it should present to the host system should be a physical one, not a logical translation. The host should be totally aware of the layout of the flash devices, and should command the things that the devices are actually capable of doing: erase this, write that, read this.
We already see the demand for this in the latest NVMe protocol spec that allows the host to give placement hints. But this is a half-measure that suggests what systems really want, which is not to vaguely influence the device but instead to tell it exactly what to do.
Interestingly, when the M4 Mac mini went on sale, the version with 32GB RAM and a 1TB drive was priced at exactly 2x the 16GB RAM / 512GB version. This kind of implies that Apple sells only RAM and storage, and gives away the rest for free.
It makes no sense for nonvolatile storage, where the power consumption, bandwidth limit, and latency of socketed interconnects are trivial, compared to the speed, latency, and power consumption of the drive itself.
For RAM, it's an entirely different ball game. The closer you can have it to the processor die, the higher the bandwidth, the lower the latency, and the lower the power consumption.
On the one hand, I get you. On the other, I’m just not sure we live in that world anymore for most people.
My daily driver is a base config 16” M1 MacBook Pro from 2021 and I have no inclinations to upgrade at all. Even the battery is still good.
I run CAD, compile and run large C++ projects. Do tons of heavy stuff in matlab. Various visualizations of simulations. My laptop just isn’t the slow thing anymore. I’m sure workloads exists that would push this machine but how many people are actually doing that. (Ok fine, chrome exists)
Even the smaller SSD isn’t an issue for me in practice because iCloud Drive and Box automatically move things I don’t often access off disk freeing local space.
Frankly if smaller memory footprints and smaller SSDs translates to lower base config prices and longer battery life it was the right choice, for me anyway.
There is someone on YT (Doctor Feng, or similar, though I can't find the channel) who literally has people ship him entry-level iPhones/iPads/MBPs, etc, and he'll upgrade them to 4 and 8 TB SSDs. And create ASMR videos of the process.
Even with upgradable memory:
When I bought my "cheesegrater" Mac Pro, I wanted 8TB of SSD.
Except Apple wanted $3,000 for 7TB of SSD (considering the sticker price came with a baseline of 1TB).
I bought a 4xM.2 card and 4x2TB Samsung Pro SSDs, which cost me $1,300. However, I kept the 1TB "system" SSD, which was faster: 6.8GB/s versus the add-in card's 5.5GB/s.
It's a similar story with memory. OWC literally sells the same memory as Apple (same manufacturer, same specifications). Apple also wanted $3,000 for 160GB of memory (going from 32GB to 192GB). I paid $1,000.
I bought one during their preorder period. The first SSD started to fail due to overheating. I just received and installed the replacement this week. Fingers crossed that it will be okay.
Important note: the seller provides no warranty for the SSDs. I was fortunate that they offered a 1-year warranty when I bought mine, but that is no longer the case now. $700 is a pretty big risk when there's no warranty.
FWIW, the non-Pro-compatible SSDs were overpriced initially as well, but they came down in price as they became more prevalent. Wait a few months, and we'll probably see the same with Pro-compatible SSDs.
> I was provided the $699 M4 Pro 4TB SSD upgrade by M4-SSD. It's quite expensive (especially compared to normal 4TB NVMe SSDs, which range from $200-400)...
Yes, that kind of culture is why while I appreciate many of Apple's technologies, I rather let customers or employers provide hardware if they feel so inclined for me to use Apple.
Machines where you can replace the storage with a screwdriver are a safe risk; the soldered upgrades void the warranty and are something you're always best having somebody who can do it in their sleep do for you. Soldering is not as easy as it used to be; even the pros won't have 100% success and will sometimes need to reflow or rework.
Or just go with a Beelink SER8 and toss Arch Linux + Wayland on it. You'll save thousands of dollars and have a better user interface that's way more customizable and efficient.
Mother's 2014 Mac mini? Slow as a dog, 30-second pauses, became unusable. Extremely tricky to reach the 5400rpm hard disc, but I found a third-party adaptor that could bodge an NVMe drive under the easily removable base flap. Suddenly transformed it into a fast, nippy, usable machine. (She paid up for the 8GB RAM originally.) But I'm still rather annoyed that Apple essentially crippled their own product and it could only be fixed by chance. Wasn't a cheap PC...
Honestly the external option seems a lot better value for the money for almost all use cases. Something like half the cost. No tinkering with the internals of the very expensive thing. You can move it between computers, upgrade the stick in it, etc.
I'm sure there are cases where you really do care about speeds >3GB/s (and USB4, the port on the Mac, should max out at ~5GB/s, which is still marginally lower than the internal drive). But I doubt they are common. It's hard to process most data in a meaningful way that fast.
Yep, you can pick up a 4TB USB-C SSD for a few hundred dollars/euros. Unless you are moving around 10s/100s of GB routinely, it's not going to be horribly slow. I had a 2TB Samsung taped to my iMac for a few years when its Fusion Drive failed. The 64GB SSD still worked, so I used that for the OS and the Samsung for the rest. I had Steam installed and X-Plane and a few other things. Worked great. This was the 2015 5K iMac. Eventually the motherboard died. I still have the SSD.
I've considered getting a Mac mini with a decently specced CPU/GPU and plenty of RAM and then just attaching a big SSD via Thunderbolt. Probably a lot cheaper than maxing out the internal SSD, and I don't think it will be that horrible. My main use case would be dealing with photos, maybe X-Plane, and some videos. I might buy some games as well, but it's not my core use case. It seems the Apple store is slowly filling up with a decent selection of ported games. I gladly pay the Apple tax to never deal with Windows again. I actually have a Linux laptop running Steam. The hardware is just really crap, and I keep longing for my MacBook whenever I have to use it. Actually typing on this thing right now as I'm traveling; I left my work M4 Max MacBook at home (it's a bit of a beast to lug around on vacation). The mini would probably be hooked up to a TV so I can watch stuff via Firefox and use a sane ad blocker and UI rather than dealing with whatever craptastic shit comes with modern smart TVs.
So a reasonably beefy mac mini would basically be my entertainment center and double as a home PC with a ginormous 4K screen. I have considered getting some AMD equivalent with Arch Linux. Still on the fence about that. But either way, external USB-C for storage seems fine.
My only nit with that is that with external storage there’s definitely a race when it comes to mounting.
More than once I’ve had, say, Photos complain that it couldn’t find its library because I have apps relaunch on startup, my library has been moved to external storage, and the drive was not ready yet.
Also there’s no guarantee, at least naively, that what was /dev/disk4 on the last boot will be /dev/disk4 on this boot. Normally not necessarily an issue, but if you care about actual drive devices vs volume names, it can be an issue. (And there may well be some low-level config file wizardry to fix that issue, I just haven’t bothered to research it.)
on the boot stuff, can you not write the boot sequence (unsure of the exact terminology here) like you do on Linux? ie just get the hardware id of the device, and set which device that will correspond to (like /sd1 or whatever)?
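Whatever the right way to pin device names turns out to be, the mount race itself can be worked around by waiting for the volume's mount point to appear before launching the apps that depend on it. A minimal sketch — the volume path, timeout, and the idea of running it from a login item are all hypothetical examples, not a documented macOS mechanism:

```python
# Block until an external volume's mount point exists, then proceed.
# Useful when login items launch before macOS finishes mounting drives.
import os
import time

def wait_for_mount(path: str, timeout_s: float = 30.0, poll_s: float = 0.5) -> bool:
    """Return True once `path` exists (i.e. the volume is mounted), False on timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        # A mounted volume shows up as a directory under /Volumes;
        # os.path.ismount catches the mount point itself.
        if os.path.ismount(path) or os.path.isdir(path):
            return True
        time.sleep(poll_s)
    return False

# Example (hypothetical volume name), e.g. from a login script:
# if wait_for_mount("/Volumes/PhotosLibraryDisk"):
#     subprocess.run(["open", "-a", "Photos"])
```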
I'm not a security researcher, but I get the distinct impression that Apple's hardware security is good enough that if you actually had an evil-maid attack on the M4 Pro Mac mini, it would instantly become the hottest news in the security community.
I would not be so sure that Apple's hardware security is good enough, taking into account that for several years it was possible to remotely and undetectably take complete control of any iPhone, because of a combination of hardware and software bugs.
The Apple Mx CPUs had some secret test registers that allowed the bypassing of all hardware memory protections and which could be accessed by those who were aware of their existence, because they were not disabled after production, as they should have been. Combined with some software bugs in some Apple system libraries, this allowed an attacker to obtain privileged execution rights by sending an invisible message to the iPhone.
It is unknown whether the same secret test registers were also open in the laptop versions of the Apple Mx CPUs. There the invisible message attack route would have been unavailable, but malicious Web pages might have been able to use the same exploit.
This incredible security failure was hot news for a couple of weeks, together with the long list of CVEs associated with it, and it was also discussed on HN, but after that it was quickly forgotten. Now most people still think that Apple devices have good security, despite their history showing otherwise. I do not think that any other hardware vendor except Apple has been caught with a security bug as dumb as those unprotected hardware test registers.
This was not a theoretical security failure; it was discovered because some unknown attackers had used it for a long time to spy on some iPhone owners. The attack was discovered by studying the logs of WiFi access points, which showed unusually high outbound traffic coming from the iPhones as they exfiltrated the acquired data.
It wasn’t known by many and was probably too valuable to burn, so targets would be selective; when it was found, it was patched on virtually every iDevice.
You make it sound like this was a huge-impact issue; it really wasn’t. Theoretically everyone could be affected, but in reality a negligible subset was.
I agree that few have been affected, but this has demonstrated gross incompetence of Apple, unless it had been an intentional backdoor, which would be even worse.
The fact that Apple keeps secret many technical details of their CPUs, like the existence of those hardware test registers, does not improve the security of their devices, but it weakens the security considerably.
Because of the Apple secrecy policy, the existence of the backdoor has been known and exploited by very few, but the same secrecy has enabled those few to spy on any interesting target for several years, without being discovered.
Had the test registers been documented, someone would have noticed quickly that they are accessible when they should not be, and the vulnerabilities would have been patched by Apple a few years earlier.
Hard disagree with the sentiment; the crown jewel of their hardware is the Secure Enclave and its software SEPOS, which has benefitted significantly from having its secrecy maintained. For decades now it has remained very much a black box that nobody has successfully attacked; and if it were attacked, we would definitely know, as the consequences of subverting it would be obvious.
As for the registers themselves, I concede that information about those specifically could've been made available.
An "evil-maid attack" is the name used for the case when the attacker has unrestricted direct physical access for a short time to the computer, like it may be the case for the personnel who cleans the office or home in the absence of the computer users (or as it may be the case at some border control points if a laptop/smartphone is taken by the authorities for a checking done in another room, where the owner is not present).
With direct physical access, a lot of things can be done which cannot be done remotely: attempting to boot from an external device, possibly using hardware fault injection to bypass protections against that; attempting to read data that has not completely decayed from the DRAM modules; replacing some hardware component or inserting an extra component that would enable spying in the future; making copies of an encrypted SSD/HDD in the hope that comparing them with future copies will enable breaking the encryption, if an encryption mode is used that does not protect against this kind of attack; and so on.
Same with the guy half-joking about Chinese malicious implants. Because of course China would never engage in espionage. It's interesting to note that he/she got downvoted a lot harder.
Ha! I’m sure I asked you about this before but I think you hinted at one point about being able to supply PD on a format other than disc and I think you said not on cassette. What was that?
I was quite pleased with the iBoff 2TB SSD I got for my M4 Mini. It's sad how badly Apple has some of us conditioned with the pathetic amounts of storage they include. I haven't had a Mac with more than 512GB of storage, basically, ever? And recently I was on my Mini, digging through some old backups, and hesitated as I normally would downloading a 40GB zip from my NAS, because "oh geeze this is 40GB plus another 40 after decompression, do I have enough space?" because 80GB is normally 15% of my Mac's storage space. Then I remembered, oh yeah, heaps of storage, this'll only cost me 4% of the total. I bought this Mac with the 256GB base SSD knowing I could upgrade, and nearly 40% of the drive was taken up out of the box.
It's pure robbery on Apple's part. Completely beyond the pale now. Their ridiculous RAM and storage prices were never that big of a deal back in the PowerBook/early Macbook Pro days, because you could always opt out if you were a tiny bit handy with a small screwdriver (my 2008 unibody lets me swap storage with *1* screw, swap a battery with zero!). Now? It's unforgivable. I don't care about soldered RAM, I get it, but it is despicable charging as much as the entire computer to upgrade the RAM a paltry 16GB.
There's profit, and there's actively making your entire product experience worse in pursuit of profit. Having to constantly hem and haw over oh god oh geeze do I have enough local storage for this basic task, having to juggle external storage and copying files back and forth (since plenty of their own shit doesn't work if it's installed on an external SSD), or constantly deleting and redownloading larger apps, makes the product experience worse. Full stop. At the very least every Mac they sell should have 512GB, if not a TB, stock. I'm tired of acting like SSDs are some insanely expensive luxury like it's 2008 again.
The RAM has always been the biggest issue, for me. I'd almost always prefer to have my larger data on an external system, in my case a NAS or several RAID enclosures. Having that data "mobility" is important. My normal workflow is to have my active work on the system in question and then move it back and forth as I finish or swap projects. In recent years, I have never maxed out the storage on my Macs. To be fair, I don't work with a bunch of 4K video editing or other huge datasets, so maybe that's where it becomes more of a problem.
man, the perspective here is quite funny to me. I just wrote a diatribe about SSD speeds vs my HDD experience in life. At $699 to have 5+GB/s throughput, a younger me would look at you like you had two heads and had just walked out of a UFO. There's no way it could be that fast/small/cheap in any future without alien tech. I get that Apple's pricing is higher than other options. Even still, it's dirt cheap for the performance it puts in consumers' hands.
Even still, I'm a huge fan of taking advantage of the cheaper options with a portable external chassis and a nice Thunderbolt cable. While not quite as fast as the internal version, it's still 2+GB/s worth of speed, which exceeds my needs/use.
So from my perspective, it's dirt cheap, compared to your "insanely expensive" perspective.
>taking advantage of the cheaper options with an portable external chassis and a nice thunderbolt cable.
This has a number of downsides on macOS. I am well aware of the cheapness of this, but you also get a worse user experience. I have a huge NAS that I could connect to over 10GbE too, save for the lack of native iSCSI drivers. I have a handful of external SSDs in enclosures, but I can't easily boot off of them (and if I do, certain features of the OS get disabled). I can't easily or reliably move my home folder to them. I can't clean up my desk without buying expensive external "docks" or something that, in addition to a standard M.2 SSD, comes out more expensive than the iBoff upgrade. I have to waste my time juggling files back and forth between the external and the internal in situations where I either want to (for faster speeds) or need to (in cases where Apple's software refuses to work if it's not on the internal SSD).
Yeah, 20 years ago the thought of 5GB/s for less than a grand was fantasy. It's not fantasy anymore, and it's not 20 years ago. I'm tired of pretending it is to justify these outrageous prices Apple is extracting from their customers.
There may be some Stockholm Syndrome, but to be clear, I'd be much happier with cheaper anything too.
You're also acting like I'm suggesting running the OS from the external. That's just a weird way to think about it. The system drive is just that: for the system, apps, and home folder. Media belongs on a different volume. Granted, I'm a media person with a professional-workflow mentality, where the media is never small enough to fit on a system drive. Plus, "back in the day" the media drives were much faster than the system drive. So it's all turned on its head in that regard.
With what? The M series has everything on the chip. You're suggesting an Intel CPU, an Nvidia GPU, and a bunch of RAM sticks be emulated to present themselves to the OS as a single device?
Performance was very, very good in my experience. Benchmarks normally took a 10% hit vs their equivalents on Windows, but being able to run macOS on arbitrary consumer hardware made performance incredibly cheap. My first proper bang-for-buck machine was an i7-4790k with an R9 270x GPU, 16GB of RAM, and a combination of SSD and HDD storage. Total cost was around $1300 CAD if I remember correctly, which is absurdly cheap compared to what you’d have to pay at the time for a Mac with that performance. I also ran macOS on a 2x E5-2670 machine with 64GB of RAM, as well as a 2x E5-2697 v2 machine, and an i9-12900k machine with an RX 6950XT GPU, all of which were incredible value compared to an off-the-shelf Mac. It’s only recently that Macs are catching up to hackintoshes performance-to-dollar wise, because Apple Silicon is very, very good. Once I get my WRX90 workstation hackintoshed it should give the Mac Studio and Mac Pro a run for their money, but not for much longer if Apple drops support for x86 after macOS 16.
The Hackintoshes I've built were much better performance for the price compared to the equivalent official models. It just took a lot longer to get them up and running. We were building production machines rather than personal-use ones, so things like Messages, the App Store, etc. that could be tricky to get working were just not something we cared about.
I ran Hackintoshes for many years. Performance on a $1500-2000 Intel platform was always extremely good (certainly better than any Mac I was willing to shell out for and sometimes better than any Mac that was sold).
That time period of the trash can Mac saw a lot of people looking to have a useful computer, and Hackintosh was the only way. We had systems with multiple GPUs that blew the doors off the trash can's years-old AMD GPUs. Then, when new GPUs came out, Hackintoshes just upgraded while the trash can sat there all sad in its uselessness.
The people involved in making the Hackintosh possible should be immortalized in stone carvings to be remembered for all of time.
> It's pure robbery on Apple's part. Completely beyond the pale now. Their ridiculous RAM and storage prices were never that big of a deal back in the PowerBook/early Macbook Pro days, because you could always opt out if you were a tiny bit handy with a small screwdriver (my 2008 unibody lets me swap storage with 1 screw, swap a battery with zero!). Now? It's unforgivable. I don't care about soldered RAM, I get it, but it is despicable charging as much as the entire computer to upgrade the RAM a paltry 16GB.
For what it's worth, I completely agree with you.
But.
I suspect that Apple isn't solely doing this for profit. Apple's pricing structure aggressively funnels people into the base config for each CPU.
Thinking about getting an M4 with upgraded ram? A base config M4 pro starts to look pretty good.
In practice, this means that Apple's logistics is dramatically simplified since 95% of people are ordering a small number of SKUs.
> There's profit, and there's actively making your entire product experience worse in pursuit of profit.
It was really egregious when the base config only came with 8 GB of ram. I'll admit that storage can be a bit tight depending on what you're trying to do, but at least external storage is an option, however ugly and/or inconvenient it may be for some.
Don't want to deal with the logistics of lots of SKUs? Don't sell them. Trying to upsell people is a money move. Selling a SKU where the 80+GB OS takes up something like 40% of the disk is a good SKU to cut. Especially if some consumers are unlikely to realize how little space they will actually have.
> Don't want to deal with the logistics of lots of SKUs? Don't sell them. Trying to upsell people is a money move. Selling a SKU where the 80+gb OS is like 40% of the disk is a good SKU to cut.
This isn't a profitable move from Apple's perspective - they try to keep the base unit at about the same price across generations. That's what happened when they moved from 8 GB of ram to 16 GB.
Tangential, just based on a funny coincidence noticeable in the article: What do all these M’s stand for, anyway? I guess the M.2 might be inherited from the m in mSATA and mPCIe(?).
For Apple… they had A for their cellphone chips, which vaguely made sense because they were the only chips Apple made at the time. But then, M for their laptop chips? M as in… mobile, or mini? But they use it in their Macs Pro, including their workstation-y ones…
This should even hold for non-US salaries. This is a machine that enables you to work for about four years. What's that against ~200k€ of /median/ EU wages over the same span? Penny pinching. The thing is that consumers and prosumers vary, and everybody wants to drive a Porsche to work and to leisure.
Not blaming anyone for wanting a machine like this. Trying to point out that tech has become so accessible that we all aspire to have a supercomputer as our daily driver.
When I was young a PC (xt and on) would set my dad back about a monthly wage. What I see is a huge compression of the price range. But the upper part of the range still exists (training LLM is not much different from the central computer at universities in the 70s/80s).
I think an EU salary of 200k/year is at least uncommon if not outright rare and definitely not median. At least in the tech space, maybe in finance it's more common.
That makes far more sense in that case. Though I would have to clarify that 200k/year is attainable in terms of total compensation, just not salary per year on average.
> This is a machine that enables you to work for about four years. [...] Penny pinching.
But that wasn't the argument. In "your time is more valuable", the time is what it takes to remove a dozen screws, replace a card, and format it. Plus any increased risk of data loss, but that should also be quite small if it exists. So for saving hundreds of dollars or more, your expected time is like an hour if you have backups (you'd better have backups!), hard to say for sure if you don't.
Right...so $500 + the risk of having to spend hours to deal with problems rather than taking it to an Apple store and having them deal with it (assuming you live close enough to one).
Obviously, the tradeoffs are different for everyone.
Apple won't upgrade the storage for you aftermarket, as far as I'm aware. There's no tax you can pay them to take your current machine and bump the spec.
Frankly, this is exactly the sort of head-up-ass attitude that will end with Apple being smacked around by investigatory commissions like what happened to John Deere and Microsoft.
> Apple won't upgrade the storage for you aftermarket
Not only that, they won't repair devices with third-party hardware. If my Mini has an issue, I'll have to remove the new SSD and reinstall the OEM one before I drop it off. I experienced this when I tried to get my 2012 MacBook Pro fixed (wet keyboard).
They did the replacement, but I learned how to do it myself, including replacing the keyboard again, another SSD upgrade, and eventually a battery upgrade.
You can replace the battery on an iPhone yourself these days. Apple's terrible design makes the process involve shipping specialized hardware to your home, for which you need to hand over a good chunk of change in collateral, but it can be done.
My suspicion for their shitty process is that it was set up purely so Apple can tell regulators "see, consumers can't be trusted to replace their own batteries, look what it takes", but they do offer a programme for it.
> I was provided the $699 M4 Pro 4TB SSD upgrade by M4-SSD. It's quite expensive (especially compared to normal 4TB NVMe SSDs, which range from $200-400)
Depends what type of flash that's comparing. QLC is cheap, TLC a bit more expensive, MLC nearly unobtainable, and SLC insanely expensive unless you SLC-mod a QLC drive.
> Fix the cables in place. This can be very fiddly. It helps greatly to have a fine pointed set of tweezers to assist with placement, bending and the application of pressure whilst screw-down is underway. Take your time and try to get all the cable core under the screw or at least a fair amount.
If you do this mod, you should really use crimped ring connectors instead of just hooking the power cables around the screws. It greatly reduces the risk of pull-out since the screw retains the connector, which also means less chance of shorts and a much easier install. Also since the terminals are uniform and flat, you get much more even clamping. I would also add heat shrink over the crimp.
I don't have a Mini so can't comment on the right size to buy, but you can buy ring terminals in practically any diameter for next to nothing:
They probably use them in production as test jig connects for passing power. They are vertical inter-board rails. When making physical connections for high current contacts it pays to have a larger surface area in case there is a poor connection as substantial draw may occur for short periods. Also, such surfaces may degrade over time, so extra surface area is desirable.
The linked repo has a pretty good rundown of possible reasons:
> If non-square screens on Macbook Pros make your blood boil with rage
> If you can't afford or don't want to pay for a Macbook Pro (smart choice)
> If you have ergonomics concerns with shrinking laptops and one size fits all keyboards
> If you like your systems to be repairable and modular rather than comprised of proprietary parts shoehorned in to a closed source design available only from a single vendor for a limited time
> If you are blind (and don't want to carry a screen around)
> If you want to use AR instead of a screen and therefore prefer to be untethered
> If you are on a sailing ship, submarine, mobile home, campervan, paraglider, recumbent touring bicycle, or otherwise off-grid
> If you want a capable unix system to power a mobile mechatronic system
I'd add in not having to deal with a Macbook in clamshell mode doing stupid crap like forcing you to double-tap the touchID button sometimes, refusing to connect to external keyboards and mice on wake, and some of the other annoyances I have dealt with.
Also, a Mac Mini is small, and a MacBook is not, at least as a function of "desk area" vs "area consumed".
> If non-square screens on Macbook Pros make your blood boil with rage
> If you are blind (and don't want to carry a screen around)
> If you want to use AR instead of a screen and therefore prefer to be untethered
You can just remove the screen! My M1 Air works just fine at least. (I’ve broken the screen, but if you just don’t need screen at all, you can sell the top half assembly and save some money.)
MB Air ($1100+ / >1.8x) is only available up to 15", which is IMHO too small for long-term comfortable use.
MB Pro ($2300+ / >3.8x) is 16", which is IMHO still a bad ergonomic experience. I'd sooner buy a mini, trash it, and buy another one four times over. Especially given they are improving annually.
1. cheaper 2. different form factor 3. more choice of battery/kb/mouse/screen/camera 4. not landfill when you have to replace battery/kb/mouse/screen/camera 5. doesn't have an annoying chunk out of the screen 6. doesn't have a video camera pointed at you all the time 7. keyboard that suits large hands 8. keyboard in preferred layout 9. not subject to apple tax on most components/upgrades
$1200 for a 4TB upgrade is ridiculous. Manufacturers holding RAM for ransom is very annoying. Especially when the lowest config isn't even meant to be purchased, and the specs are so low it will underperform or be obsolete in a few years.
This is kind of why people started cloning Macs in the '90s. They were too expensive straight from the factory.
It’s wild to see how much Apple invests in making these as hostile to user upgrades as possible. But also cool to see people out there with the skills to desolder the chips, memory, and storage and replace them with a much faster alternative.
If Apple truly cared about their carbon footprint, devices would be easily serviceable and upgradeable by user
You mean the company that had several generations of those terrible laptop designs that made you rip out the whole chassis when your keyboard became unusable after dust got into the keys?
New in box after having been stored in a warehouse for twenty years maybe. Apple isn't any greener than any of their competitors.
Comparing the speeds of a new flash device and an old, used one will typically not be valid unless steps are taken to condition the new device into a steady operating state.
Putting the new one through an amount of use equal to what the old one saw, because SSD controller firmware is unpredictable and many SSDs see reduced performance over time.
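In practice, preconditioning is usually done with a synthetic workload rather than real use; a common SNIA-style approach is a full sequential fill followed by sustained random writes until throughput settles. A sketch as an fio job file — the file path, size, and runtime below are illustrative placeholders, not a recommendation:

```ini
; Preconditioning sketch for steady-state SSD benchmarking (assumes fio).
; Point filename at a dedicated test file or device you can destroy.
[global]
filename=/tmp/ssd-precondition.bin
size=4g
direct=1

[seq-fill]
; Write the full capacity twice so every LBA has been touched.
rw=write
bs=128k
loops=2

[rand-steady]
; Then hammer with random writes until performance stabilizes.
stonewall
rw=randwrite
bs=4k
iodepth=32
time_based
runtime=600
```

Only after the random-write phase levels off do before/after throughput numbers compare like for like.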
So you pay $700 for an SSD that otherwise retails for $200 and then do an "unauthorized" modification of your own computer and void the warranty to install it, but that's still preferable because it otherwise costs $1200 directly from Apple. The Apple tax is really something else.
> an "unauthorized" modification of your own computer and void the warranty to install it
Citation needed. This modification doesn't look to me at all like it'd void the warranty unless you damage the machine while you do the installation.
If you need to make a warranty claim, you should of course reinstall the factory one before you do so, since the vendor doesn't expect users to replace it and won't have any process for spotting a non-Apple card and returning it to you after service.
But voiding your warranty for this has been roundly rejected, in the US at least, as long as you don't damage your equipment by doing it.
I completely quit buying Apple devices altogether, but I still occasionally check their website. The SSD upgrade prices are ridiculous and funny, especially since I keep meeting people who are convinced that Apple's SSDs are somehow magically better than my 60 EUR Samsung M.2 and that the price is hence justified.
The upgrade prices:
- 13" MacBook Air: 256GB to 512GB -> 256GB for 250 EUR
- 14" MacBook Pro: 512GB to 1TB -> 512GB for 250 EUR
So the Air upgrade is twice the price for what is - as far as I was able to figure out - the same hardware?
Double check the SoCs. The base model MacBook Air has a slower GPU than the base model MacBook Pro and the upgraded MacBook Air. When shopping for Apple products, you need to compare every number on the spec page individually because Apple is scared of model numbers.
For a while, some MacBooks also had slower disks because some capacities used one NAND chip while others used two. I believe they stopped doing that for their latest models, though. That kind of fuckery means you need to look up benchmarks for each individual model, because the performance differences aren't clear from the product description.
SSD speeds are nothing short of miraculous in my mind. I come from the old days of striping 16 HDDs together (at a minimum number) to get 1GB/s throughput. Depending on the chassis, that was 2 8-drive enclosures in the "desktop" version or the large 4RU enclosures with redundant PSUs and fans loud enough to overpower arena rock concerts. Now, we can get 5+GB/s throughput from a tiny stick that can be used externally via a single cable for data&power that is absolutely silent. I edit 4K+ video as well, and now can edit directly from the same device the camera recorded to during production. I'm skipping over the parts of still making backups, but there's no more multi-hour copy from source media to edit media during a DIT step. I've spent many a shoot as a DIT wishing the 1s&0s would travel across devices much faster while everyone else on the production has already left, so this is much appreciated by me. Oh, and those 16 device units only came close to 4TB around the time of me finally dropping spinning rust.
The first enclosure I ever dealt with was a 7-bay RAID-0 that could just barely handle AVR75 encoding from Avid. Just barely to the point that only video was saved to the array. The audio throughput would put it over the top, so audio was saved to a separate external drive.
Using SSD feels like a well deserved power up from those days.
The latency of modern NVMe is what really blows my mind (as low as 20-30 µs). NVMe is about an order of magnitude quicker than SAS and SATA.
This is why I always recommend developers try using SQLite on top of NVMe storage. The performance is incredible. I don't think you would see query times anywhere near 20 µs with a hosted SQL solution, even if it's on the same machine using named pipes or another IPC mechanism.
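To make that concrete, here's a rough sketch of how you might measure it yourself with Python's stdlib sqlite3 module. The file name and schema are made up, and the absolute numbers depend entirely on your hardware:

```python
import sqlite3
import time

# Hedged sketch: time indexed point queries against a local SQLite file.
# "bench.db" and the kv table are made up for illustration; on NVMe the
# per-query time for a primary-key lookup is often tens of microseconds.
con = sqlite3.connect("bench.db")
con.execute("CREATE TABLE IF NOT EXISTS kv (k INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT OR REPLACE INTO kv VALUES (?, ?)",
                [(i, f"value-{i}") for i in range(10_000)])
con.commit()

n = 1_000
start = time.perf_counter()
for i in range(n):
    con.execute("SELECT v FROM kv WHERE k = ?", (i,)).fetchone()
elapsed = time.perf_counter() - start
print(f"avg point query: {elapsed / n * 1e6:.1f} us")
```

Many of those reads will be served from SQLite's page cache and the OS page cache rather than the drive, which is part of the point: with an embedded database there's no IPC round trip at all.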
Meanwhile, at a recent job, they told me they are on IBM AS/400 boxes “because Postgres and other SQL databases can’t keep up with the number of transactions we have”… for a company that has a few thousand inserts per day…
Obviously it's not true that they'd overwhelm modern databases, but it feels like that place has had the same opinions since the 1960s.
Then there's Optane, which got down to ~10 µs. The newest controllers and NAND are inching closer on random I/O, but Optane is still the most miraculous SSD tech that's normally obtainable.
But they've retired it??
Optane is no longer available :(
EBay!
Eventually we'll have machines with unified memory+storage. You'll certainly have to take a bit of a performance hit in certain scenarios but also think about the load time improvements. If you store video game files in the same format they'd be needed at runtime you could be at the main menu in under a second.
> you could be at the main menu in under a second
That would be possible even on spinning hard drives, as long as they're already spinning.
The fastest memory can't prevent the real reason games take so long to the menu: company logos. The Xbox can already resume a closed game within a few seconds, loading a simple main menu is trivial in comparison.
At a minimum, we should be able to get everything to DRAM speeds. Beyond that you start to run into certain limitations. Achieving L1 latency is physically impossible if the storage element is more than a few inches away from the CPU.
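For anyone who wants to sanity-check that claim, here's a back-of-the-envelope calculation with assumed numbers (~1 ns for an L1 hit, signals at roughly half the speed of light):

```python
# Back-of-the-envelope check of the distance claim. The numbers here are
# assumptions: ~1 ns for an L1 hit (4-5 cycles at 4-5 GHz), and signal
# propagation at roughly half the speed of light in a PCB trace.
c = 299_792_458          # speed of light in vacuum, m/s
signal_speed = 0.5 * c   # rough propagation speed in copper
l1_latency = 1e-9        # assumed L1 hit latency, seconds

# Farthest the storage element could sit if a full round trip had to
# complete within a single L1 hit:
max_distance_cm = signal_speed * l1_latency / 2 * 100
print(f"max one-way distance within one L1 hit: {max_distance_cm:.1f} cm")
```

That comes out to roughly 7.5 cm, i.e. about three inches, which lines up with the "more than a few inches away" limit.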
Most new motherboards do already have the highest throughput M.2 connector very near the CPU.
The most recent desktop I built has it situated directly below the standard formfactor x16 PCI slot.
I think part of the reason why it's so close is also for signal integrity reasons.
Far more so than latency.
Also: routing PCIe lanes is a pain. Being able to take 4 pairs and terminate them makes routing everything else simple
The separation into RAM and external storage (floppy disks, magnetic tapes, hard drives and later SSD etc) is the sole consequence of technology not being advanced enough at the time to store all of the data in memory.
Virtual memory subsystems in operating systems of the last 40+ years pretty much do exactly that – they essentially emulate infinite RAM that spills over onto the external storage that backs it up.
Prosumer-grade laptops with large amounts of RAM are already easily available, and in 2-3 years there will be ones with 256-512 GB as well, so… it is not entirely inconceivable that in 10-20 years (maybe more, maybe less) Optane-style memory is going to make a comeback and laptops/desktops will come with just memory, and the separation into RAM and external storage will finally cease to exist.
P.S. RAM has become so cheap and has reached such large capacity that the current generation of young engineers don't even know what a swap is, and why they might want to configure it.
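The spill-over idea is easy to demo: mmap presents a file as ordinary memory and lets the kernel page it in and out on demand. A minimal sketch, with an arbitrary file name and size:

```python
import mmap
import os

# Sketch of the "RAM spills over onto storage" idea: mmap presents a file
# as ordinary memory and the kernel pages it in and out on demand.
path = "backing.bin"      # arbitrary backing file
size = 64 * 1024 * 1024   # 64 MiB; could be far larger than physical RAM

with open(path, "wb") as f:
    f.truncate(size)  # sparse file: nothing is actually written yet

with open(path, "r+b") as f:
    mem = mmap.mmap(f.fileno(), size)
    mem[0] = 0x42          # touching a byte faults its page in
    mem[size - 1] = 0x24   # the kernel handles the storage I/O for us
    first, last = mem[0], mem[size - 1]
    mem.close()

print(first, last)
```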
I have a feeling (and it's just a feeling) that many SoC-style chips of the future will abandon Von Neumann Architecture entirely.
It's not that much of a stretch to imagine ultra dense wafers that can have compute, storage, and memory all in one SoC.
First, unify compute and memory. Then, later, unify those two with persistent storage so that we have something like RAM = VRAM = Storage.
I don't think this is around the corner, but certainly possible in about 12 years.
I am also of the opinion that we are heading towards the convergence, although it is not very clear yet what the designs are going to converge on.
Pretty much every modern CPU is a hybrid design (either modified Harvard or von Neumann), and then there is SoC, as you have rightfully pointed out, which is usually modified Harvard, with heterogeneous computing, integrated SIMT (GPU), DSPs and various accelerators (e.g. NPU) all connected via high-speed interconnects. Apple has added unified memory, and there have been rumours that with the advent of the M5 they are going to change how the memory chips are packaged (added to the SoC), which might (or might not) lay a path for the unification of RAM and storage in the future. It is going to be an interesting time.
I’m still buying old Optane drives for that latency when it matters. RocksDB loves it.
>Optane drives
This is the secret hack a lot of people don't know about!
It's not really the SSDs themselves that are incredibly fast (they still are, somewhat); it's mostly the RAM cache and clever tricks that make TLC feel like SLC.
On most (cheap) SSDs, performance goes off a cliff once you hit the boundary of these tricks.
[flagged]
It can be good to know that SSDs are fast until you exhaust the cache by copying gigs of files around.
It doesn’t hurt to be aware of the limitations even if for the common case things are much better.
We're talking about devices capable of >2 GB/s throughput, and acquiring footage at <0.5 GB/s. No caching issues, but I'm not buying el cheapo SSDs either. These are all rated and approved by camera makers. It wasn't brought up because it's not an issue. For people who are wondering why camera makers approve or reject particular recording media, this is part of the answer. But nobody was asking that particular question, and instead the reply tried to rain on my parade.
Not sure what warranted such an aggressive response.
I grew up in the 90s, on 56kb modems and PCs that rumbled and whined when you booted them up. I was at the tail end of using floppies.
I never said I didn't love the speed of SSDs, and when they just started to become mainstream it was an upgrade I did for everyone around me. I made my comment in part because you mentioned dumping 4K onto the SSD and/or editing it. It can be a nasty surprise if you're doing something live, and suddenly your throughput plummets, everything starts to stutter and you have no idea why.
It wasn’t. IMHO the best feature and only real reason I hang around HN is exactly because I get to interact with others who know more.
The magic is in knowing how the trick works.
A bit further down the comment chain, martinald has a great succinct explanation:
https://news.ycombinator.com/item?id=44538489
> once you hit the boundary of these tricks
Tell me more. When do I hit the boundary? What is perf before/after said boundary? What are the tricks?
Tell me something actionable. Educate me.
Your tone is quite odd here. I'm having difficulty parsing your intention, but I'm going to assume you're being genuine because why not.
For the RAM cache, you hit the boundaries when you exhaust the RAM cache. It performs faster, but is smaller and once full, data has to be off/loaded at the rate of the slower backing NAND. It might not be RAM, either, sometimes faster SLC NAND is used for the cache.
It's not really possible to describe it much more concretely than that beyond what you've already been told, performance falls off a cliff when that happens. How long "it" takes, what the level of performance is before and after, it all depends on the device.
There are many more tricks that SSD manufacturers use, but caching is the only one I know of related to speed so I'll leave the rest in the capable hands of Google.
Two of the main ones actually aren't really DRAM-related, but come down to how full the drive is.
Most (all?) SSDs need a good 20% of the drive free for garbage collection and wear levelling. Going over this means the drive can't do this "asynchronously" and instead has to do it as things are written, which really impacts speed.
Then on top of that, on cheaper flash like TLC and QLC, the drive can go much faster by having free space "pretend" to be SLC and writing in a way that is very "inefficient" size-wise but fast (think a bit like striped RAID0, but instead of the data reliability issues you get with that, it only works while you have extra space available). Once it hits a certain threshold it can't pretend anymore, because pretending uses too much space, and it has to write in the proper format.
These things are additive too so on cheaper flash things get very very slow. Learnt this the hard way some years ago when it would barely write out at 50% of HDD speeds.
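A toy model of that cliff, with invented numbers (a 100 GB pSLC cache in front of slow native flash), shows the shape of the effect:

```python
# Toy model of the pSLC-cache cliff, with invented numbers: writes land in
# a fast pseudo-SLC region until it fills, then drop to the native
# TLC/QLC rate. Real drives also fold the cache back in the background,
# which this deliberately ignores.
def write_time(total_gb, cache_gb=100, fast_gbps=6.0, slow_gbps=0.5):
    """Seconds to write total_gb straight through, no background folding."""
    fast_part = min(total_gb, cache_gb)
    slow_part = max(0.0, total_gb - cache_gb)
    return fast_part / fast_gbps + slow_part / slow_gbps

for gb in (50, 100, 200, 400):
    t = write_time(gb)
    print(f"{gb:4d} GB -> {t:7.1f} s  ({gb / t:.2f} GB/s effective)")
```

With these made-up parameters the effective rate collapses from 6 GB/s to well under 1 GB/s as soon as the write is larger than the cache, which is the same shape reviewers see in sustained-write tests.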
> cheaper flash like TLC and QLC the drive can go much faster by having free space "pretend" to be SLC
I'm afraid I don't understand how exactly this makes it faster. In my head, storing fewer bits per write operation should decrease write bandwidth.
Of course we observe the opposite all the time, with SLC flash being the fastest of all.
Does it take significantly more time to store the electrical charge for any given 1-4 bits with the precision required when using M/T/QLC encoding?
In theory it should be more efficient, but in reality it's not. Any gains from 'modulating' efficiency are eaten by having to use very aggressive error correction and also write/read things multiple times (because the error rate is so high). I think QLC usually needs somewhere on the order of 8 "write/read" cycles, for example, to verify the data is written correctly.
Tone's hard to read in text, but I read the tone as "eagerly excited to learn more". I am also interested and appreciate your comment here.
From a lawyer, 'tell me something actionable' would mean 'cross the line and say something upon which I can sue you', so I sympathise with anyone finding it unsettling.
In context it's more likely a translation issue, perhaps from modern translation picking up a challenging tone and inadvertently turning it to 'challenged by a lawyer or cop' mode to maintain the tone.
My tone is a combination of genuine curiosity and moderate annoyance at a dismissive but unhelpful comment.
Root comment: SSD speed is miraculous!
Jorvis: well ackshually it's just RAM and tricks that run out
Me: your comment provides zero value
I am annoyed by well-ackshually comments. I’d love to learn more about SSD performance. How is the RAM filled? How bad is perf when you cache miss? What’s worst-case perf? What usage patterns are good or bad? So many interesting questions.
Look at this Kioxia Exceria drive[0]. It plummets from 6800 Mb/s (850 MB/s) all the way to 1600 Mb/s (200 MB/s).
It's not really a well-ackshually comment; there are real pitfalls. Especially when doing 4K. RAW 4K is 12 Gb/s and would fill 450GB within 5 minutes. ProRes 4444 XQ within 10 minutes. ProRes 4444 in 40 minutes.
Martinald's comment is right too. By being very inefficient and treating TLC (or even QLC) as single-level, writing only one bit per cell, much higher performance can be extracted. But once you hit the ~80% full threshold, the drive starts to repack the least-used data into multiple bits per cell.
A RAM cache and SLC cache can both speed access times up, act as a write buffer and mask the shuffling of bits, but there is a limit.
Lastly, it's kind of ironic to see me portrayed as jaded when someone else is the one pouring out vitriol all over the thread. Ah well.
[0]https://tweakers.net/reviews/12310/wd-sn5000-4tb-ssd-hoe-sne...
> this Kioxia Exceria drive[0]. It plummets from 6800 Mb/s (850 MB/s) all the way to 1600 Mb/s (200 MB/s).
This is interesting. Thanks!
> Its not really a well ackshually comment, there's real pitfalls
I don’t doubt the existence of pitfalls. But the lack of specificity was quite irritating!
Right? I’m comparing my direct experience of enduring the pain of slower than Christmas HDDs to the incredible speeds of SSDs, and get a well actually it’s not SSDs that are fast blah blah. Look dude, I don’t care about your magic smoke that you’re so smart you know how the smoke is made. I just care that I can transfer data at blisteringly fast speeds. I couldn’t care less about QLC, SLC, or TLC because reading/writing at >2GB/s is all the tender loving care I need. Don’t rain on my parade because you’re jaded.
I haven’t had a spinning platter in my dev machine since, I think, 2008 or 2009. Even back then an SSD was the single biggest upgrade I’d seen since the first 3D accelerator cards in the late 90s. (Oh god, I’m old.)
More recently we saw SSDs get added to video game consoles, and load times are about 4x faster. And that’s with code/data optimized for a spinning platter, not an SSD.
I know they aren’t actually magic. But they might as well be! I’d love to hear details on what weird conditions reduce their performance by 10x. That’d be cool and fun to know. Alas.
Tom's Hardware usually includes a "Sustained Write Performance and Cache Recovery" test.
The test measures the write cache speed and the time to the fall to the native NAND write speed. There are usually irregularities in the sustained write speeds as well.
https://www.tomshardware.com/reviews/wd-black-sn850x-ssd-rev...
The other test I've seen is based on writing and using up free space, SSD performance can drop off as the free space fills up and garbage collection efficiency goes down. I think this impacts random writes particularly
In the enterprise space, drives tend to keep more over provisioned NAND free to maintain more consistent performance. Very early on the SSD timeline, it was advisable to only allocate 80% of consumer drives if you were using them outside of desktops and expected the workload to fill them.
This hits home even more since I started restoring some vintage Macs.
For the ones new enough to get an SSD upgrade, it's night and day the difference (even a Power Mac G4 can feel fresh and fast just swapping out the drive). For older Macs like PowerBooks and classic Macs, there are so many SD/CF card to IDE/SCSI/etc. adapters now, they also get a significant boost.
But part of the nostalgia of sitting there listening to the rumble of the little hard drive is gone.
> But part of the nostalgia of sitting there listening to the rumble of the little hard drive is gone.
I remember this being a key troubleshooting step. Listen/feel for the hum of the hard drive OR the telltale click clack, grinding, etc that foretold doom.
Thank the gawds we no longer have to worry about the click of death
Now it's just a silent glitch of death.
I've just finished CF swapping a PowerBook 1400cs/117. It's a base model with 12MB RAM, so there are other bottlenecks, but OS 8.1 takes about 90 seconds from power to desktop and that's pretty good for a low-end machine with a fairly heavy OS.
Somehow the 750MB HDD from 1996 is still working, but I admit that the crunch and rumble of HDDs is a nostalgia I'm happy to leave in the past.
My 1.67 PowerBook G4 screams with a 256GB mSATA SSD-IDE adapter. Until you start compiling code or web surfing, it still feels like a pretty modern machine. I kind of wish I didn't try the same upgrade on a iBook G3, though...
>I kind of wish I didn't try the same upgrade on a iBook G3, though...
Oh god. Those were the worst things ever to upgrade the hard drive. Just reading this gave me a nightmare flashback to having to keep track of all the different screws. This is why my vintage G3 machine is a Pismo instead of an iBook.
Yeah this machine will probably never be the same. It does have an SSD now! But also a CD drive that isn't latching properly and the entire palmrest clicks the mouse button.
It doesn't help that I'm not a great laptop repair tech as is, but wow are those iBooks terrible. The AlBook was fine, and the Unibody MacBooks just a few years later had the HDD next to the battery under a tool-less latch.
I just picked up a 1.5GHz Powerbook G4 12-inch in mint condition. RAM is maxed out but I've been putting off the SSD-IDE upgrade because of how intrusive it is and many screws are involved.
I had a 2011 MBP that I kept running by replacing the HDD with an SSD, and then replacing the DVD-ROM drive with a second SSD. The second SSD had throughput limits because the bay was designed for a shiny round disc, so it got a slower interface chip. I kept that machine until the 3rd GPU replacement died, and eventually switched to a second-gen butterfly-keyboard model. The only reason it was tolerable was the SSDs, oh, and the RAM upgrades.
Did you ever have the GPU issue? My sister had a 2011, I had to desolder a resistor (or maybe two?) on it to bypass the dGPU since it was causing it to boot loop. But now it's still running and pretty happily for some basic needs!
Yes, that's why it was on the 3rd repair. Apple knew they had issues and replaced it for me before by replacing the entire main board. Twice. The last time I took it in, they would no longer replace for free and wanted $800 for the repair. That was half the cost of modern laptop, so I chose no. I was unaware of being able to disable the GPU like that. I still have it on a shelf, but honestly, I don't see trying to do the hack now but might have considered back then.
I posted how I did it on my blog: https://www.jeffgeerling.com/blog/2017/fixing-2011-macbook-p...
Might still be worth doing for someone into older computers, I've considered putting a few of my old computers on the free pile at a VCF!
hmm, it is a Friday with nothing planned for the weekend...
While these are geared toward retrocomputing, there are things that attempt to simulate the sound based on activity LEDs: https://www.serdashop.com/HDDClicker
> For older Macs like PowerBooks and classic Macs, there are so many SD/CF card to IDE/SCSI/etc.
Would those be bandwidth limited by the adapter/card or CPU? Can you get throughput higher than say, a cheap 2.5" SSD over Sata 3/4?
You are limited at first by the IDE/SCSI interface, so below SATA speeds.
Oh I must have misread the comment initially as PCIE/SCSI
You should try now-discontinued Intel Optane especially p5800x. I got my OS running on them and they are incredible.
I'm running 12 of them for ZFS cache/log/special, and they are fast/tough enough to make a large array on a slow link feel fast. I shake my fist at Intel and Micron for taking away one of the best memory technologies to ever exist.
The endurance on those drivers is amazing.
I have (stupidly) used a too small Samsung EVO drive as a caching drive, and that is probably the first computer part that I've worn out (bar a mouse & keyboard).
Just a few more years until we get MRAM as viable storage technology. And affordable fusion, and hovercars.
Totally. I spent a lot of time 15-20 years ago building out large email systems.
I recently bought a $17 SSD for my son’s middle school project that was spec'd to deliver about 3x what I needed in those days. From a storage perspective, I was probably spending $50/GB/mo all-in to deploy a multi-million-dollar storage solution. TBH… you’d probably smoke that system with used laptops today.
> I come from the old days of striping 16 HDDs together (at a minimum number) to get 1GB/s throughput
Woah, how long would that last before you'd start having to replace the drives?
If you're interested in some hard data, Backblaze publishes their HD failure numbers[1]. These disks are storage optimized, not performance optimized like the parent comment, but they have a pretty large collection of various hard drives, and it's pretty interesting to see how reliability can vary dramatically across brand and model.
---
1. https://www.backblaze.com/cloud-storage/resources/hard-drive...
The Backblaze reports are impressive. It would have been very handy to know which models to buy. They break it down by capacity within the same family of drives, so a 2TB might be sound while the 4TB might be more flaky. That information is very useful when it comes time to think about upgrading capacity in the arrays. When someone has gone through these battles and then gives away what they learned, it would just be dumb not to take advantage of their generosity.
Many years ago I felt like I dodged a bullet splurging for the 4TB Seagates instead of the 3TB Seagates I needed.
Can confirm. My 3TB Seagate was the only disk so far (knocking on wood) that died in a way that lost some data. Still managed to make a copy with dd_rescue, but there was a big region that just returned read errors and I ended up with a bunch of corrupt files. Thankfully nothing too important...
Depending on the HDD vendor/model. We had hot spares and cold spares. On one build, we had a bad batch of drives. We built the array on a Friday, and left it for burn-in running over the weekend. On Monday, we came in to a bunch of alarms and >50% failure rate. At least they died during the burn-in so no data loss, but it was an extreme example. That was across multiple 16-bay rack mount chassis. It was an infamous case though, we were not alone.
More typically, you'd have a drive die much less frequently, but it was something you absolutely had to be prepared for. With RAID-6 and a hot spare, you could be okay with a single drive failure. Theoretically, you could lose two, but it would be a very nervy day getting the array to rebuild without issue.
I asked because I built a makeshift NAS for myself with three 4TB IronWolfs, but they died before the third year. I didn't investigate much, but it was most likely because of power outages and the lack of a UPS at the time. It's still quite a bit of work to maintain physical hard drives, and the probability of failure tends to increase with the number of units in the array: what matters isn't the likelihood of any single drive failing, but the likelihood of none of them failing over a period of time, which shrinks with every drive you add.
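The cumulative-failure point can be made concrete with a few lines of arithmetic. The annual failure rate used here is an assumption, not a measurement:

```python
# The cumulative-failure arithmetic: the chance that at least one drive in
# an array fails grows quickly with drive count. The 1.5% annual failure
# rate is an assumed number, not a measurement.
def p_any_failure(n_drives, p_annual=0.015, years=1.0):
    """Probability that at least one of n drives fails within the period."""
    p_one_survives = (1 - p_annual) ** years
    return 1 - p_one_survives ** n_drives

for n in (1, 3, 16, 24):
    print(f"{n:2d} drives: {p_any_failure(n, years=3):6.1%} chance of >=1 failure in 3 years")
```

Even with a modest per-drive rate, a 16- or 24-bay array is very likely to see at least one failure over a few years, which is why hot spares and redundancy levels beyond RAID-5 matter.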
Any electronic gear that you care about must be connected to a UPS. HDDs are very susceptible to power issues. Good UPSes are also line conditioners, so you get a clean sine wave rather than whatever comes straight from the mains. If you've never seen it, connect a meter to an outlet in your home and watch how much fluctuation you get throughout the day. Most people think about spikes/surges while forgetting that dips and under-volting are damaging as well. Most equipment has a range of acceptable voltage, but you'd be amazed at the number of times mains will dip below that range. Obviously location will have an effect on quality of service, but I hear my UPSes kick in multiple times a week to cover a dip, if only for a couple of seconds.
The fun thing about storage pools is that they can lull you into thinking they are set it and forget it. You have to monitor SMART messages. Most drives will give you a heads up if you know where to look. Having the fortitude to have a hot spare instead of just adding it to the storage pool goes a long way from losing data.
I run 24x RAID at home. I’m replacing disks 2-3 times per year.
Are your drives under heavy load or primarily just spinning waiting for use? Are they dying unexpectedly, or are you watching the SMART messages and prepared when it happens?
They’re idle most of the time. Powered on 24/7 though, and maybe a few hundred megabytes written every day, plus a few dozen gigabytes now and then. Mostly long-term storage. SMART has too much noise; I wait for ZFS to kick a disk out of the pool before changing it. With triple redundancy, I've never got close to data loss.
To be clear, I should have said replacing 2-3 disks per year.
That seems awfully high no? I've been running a 5 disk raidz2 pool (3TB disks) and haven't replaced a single drive in the last 6ish years. It's composed of only used/decommissioned drives from ebay. The manufactured date stamp on most of them says 2014.
I did have a period where I thought drives were failing but further investigation revealed that ZFS just didn't like the drives spinning down for power save and would mark them as failed. I don't remember the parameter but essentially just forced the drives to spin 24/7 instead of spinning down when idle and it's been fine ever since. My health monitoring script scrubs the array weekly.
In a similar vein, I had Western Digital Raptors striped in my gaming PC in the mid 2000s. I remember just how amazed I was after moving to SSDs.
So, someone could stripe several SSDs to gain even more performance for purposes other than video editing, right?
[dead]
The article speculates on why Apple integrates the SSD controller onto the SOC for their A and M series chips, but misses one big reason, data integrity.
About a decade and a half ago, Apple paid half a billion dollars to acquire the patents of a company making enterprise SSD controllers.
> Anobit appears to be applying a lot of signal processing techniques in addition to ECC to address the issue of NAND reliability and data retention. In its patents there are mentions of periodically refreshing cells whose voltages may have drifted, exploiting some of the behaviors of adjacent cells and generally trying to deal with the things that happen to NAND once it's been worn considerably.
Through all of these efforts, Anobit is promising significant improvements in NAND longevity and reliability.
https://www.anandtech.com/show/5258/apple-acquires-anobit-br...
> The article speculates on why Apple integrates the SSD controller onto the SOC for their A and M series chips, but misses one big reason, data integrity.
If they're really interested in data integrity they should add checksums to APFS.
If you don't have RAID you can't rebuild corrupted data, but at least you know there's a problem and perhaps restore from Time Machine.
For metadata, you may have multiple copies, so can use a known-good one (this is how ZFS works: some things have multiple copies 'inherently' because they're so important).
Edit:
> Apple File System uses checksums to ensure data integrity for metadata but not for the actual user data, relying instead on error-correcting code (ECC) mechanisms in the storage hardware.[18]
* https://en.wikipedia.org/wiki/Apple_File_System#Data_integri...
> If they're really interested in data integrity they should add checksums to APFS.
Or you can spend half a billion dollars to solve the issue in hardware.
As one of the creators of ZFS wrote when APFS was announced:
> Explicitly not checksumming user data is a little more interesting. The APFS engineers I talked to cited strong ECC protection within Apple storage devices. Both NAND flash SSDs and magnetic media HDDs use redundant data to detect and correct errors. The Apple engineers contend that Apple devices basically don't return bogus data.
https://arstechnica.com/gadgets/2016/06/a-zfs-developers-ana...
APFS keeps redundant copies and checksums for metadata, but doesn't constantly checksum files looking for changes any more than NTFS does.
> Or you can spend half a billion dollars to solve the issue in hardware.
And hope that your hardware/firmware doesn't ever get bugs.
Or you can do checksumming at the hardware layer and checksumming at the software/FS layer. Protection in depth.
ZFS has caught issues from hardware, like when LBA 123 is requested but LBA 456 is delivered: the hardware-level checksum for LBA 456 was fine, and so it was passed up the stack, but it wasn't actually the data that was asked for. See Bryan Cantrill's talk "Zebras All the way Down":
* https://www.youtube.com/watch?v=fE2KDzZaxvE
And if checksums are not needed for a particular use-case, make them toggleable: even ZFS has a set checksums=off option. My problem is not having the option at all.
When the vast majority of the devices you sell run on battery power, it makes far more sense from a battery life perspective to handle issues in hardware as much as possible.
For instance, try to find a processor aimed at mobile devices that doesn't handle video decoding in dedicated hardware instead of running it on a CPU core.
> […] handle issues in hardware as much as possible.
1. There is hardware support for (e.g.) SHA in ARM:
* https://developer.arm.com/documentation/ddi0514/g/introducti...
But given Apple designs their own CPUs they could add extensions for anything they need. Or use a simpler algorithm, like Fletcher (which ZFS uses):
* https://en.wikipedia.org/wiki/Fletcher%27s_checksum
2. It does not have to be enabled by default for every device. The main problem is the lack of it even as an option.
I wouldn't necessarily use ZFS checksums on a laptop, but ZFS has them for when I use it on a not-laptop.
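For a sense of how cheap the Fletcher family is, here's a minimal Fletcher-32 over 16-bit words (ZFS's fletcher4 is the same idea over 32-bit words). The printed values match the published test vectors:

```python
# Minimal Fletcher-32 over 16-bit little-endian words, to show how cheap
# this checksum family is (ZFS's fletcher4 is the same idea over 32-bit
# words): just two running sums per word, no multiplies.
def fletcher32(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"  # pad to a whole number of 16-bit words
    s1 = s2 = 0
    for i in range(0, len(data), 2):
        word = data[i] | (data[i + 1] << 8)
        s1 = (s1 + word) % 65535  # simple sum
        s2 = (s2 + s1) % 65535    # sum of sums: makes it order-sensitive
    return (s2 << 16) | s1

print(hex(fletcher32(b"abcde")))   # 0xf04fc729
print(hex(fletcher32(b"abcdef")))  # 0x56502d2a
```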
> given Apple designs their own CPUs they could add extensions for anything they need.
Indeed. They added an entire enterprise grade SSD controller.
> In its patents there are mentions of periodically refreshing cells whose voltages may have drifted, exploiting some of the behaviors of adjacent cells and generally trying to deal with the things that happen to NAND once it's been worn considerably.
Good point.
That solution doesn't help anyone who's using external storage, though, so it kinda feels like a half billion dollars spent on a limited solution.
Apple does not care about external storage at all, as in external disks. They offer iCloud for external storage. They don't sell external disks. They don't like cables. They make lots of money selling you a bigger internal disk.
Their store has a whole section dedicated to storage, most of it external. https://www.apple.com/shop/mac/accessories/storage
There is nothing preventing you from running OpenZFS on external storage if you are worried that the hardware you purchased is less reliable.
That's my point, though, is that it seems weird to spend a half billion dollars just to solve the problem for an extremely common use case by saying "use OpenZFS".
Why not come up with a solution that covers external storage too, instead of spending all that money and relying on external solutions? I just don't understand why they couldn't have optional checksums in APFS.
It's far more weird that NTFS still makes zero effort to maintain file integrity on any level, on internal or external disks.
ReFS exists, so Microsoft knew they needed to do something, but they have utterly failed to protect the vast majority of users.
To be fair, though, NTFS predates APFS by over 20 years.
Don't get me wrong, there's no reason Microsoft can't transition to another filesystem (like offering ReFS outside of Server or whatever Windows variants support it currently), but I don't understand why a company would transition to a new filesystem in 2016 and not include a data checksums option. Hell, ReFS predates APFS, and I think it even has optional data checksums.
To be fair, NTFS is still the default Windows 11 filesystem in 2025, and Microsoft still makes zero effort to ensure file integrity when you use that default Windows filesystem.
Handling file integrity in hardware is a big step up.
> Handling file integrity in hardware is a big step up.
Is there any evidence that Apple actually has better hardware data integrity than anyone else, though? They make claims in the article linked a few posts back, but AFAIK SSDs in general make use of error correcting codes, not just Apple's SSDs.
That article also points out how even multi-million dollar arrays are known to return bad data, and previous Apple SSD devices have been known to do the same.
I agree that the state of default filesystems is bad, but I'm not convinced that Apple's hardware solution is anything more than them saying, "Trust me, bro."
ReFS is available on Windows 10/11 client; it's what the DevDrive feature uses. And the current Insider allows installation of the OS/boot volume on ReFS.
https://forums.guru3d.com/threads/testing-instaling-windows-...
Every time I tried OpenZFS on my iMac, it absolutely crushed the performance of the entire machine.
We’re talking “watch the mouse crawl across the machine” crushed. Completely useless. Life returned to normal when I uninstalled it.
Also, I’ve heard anecdotes that ZFS and USB do not get along.
I’ve also heard contrary experiences. Some folks, somewhere, may be having success with ZFS on external drives on an iMac.
I’m just not one of them.
No one requires you to use APFS for your external storage!
And yet it's the default when formatting a device on macOS.
Being afraid to not use the default is evidence of not being a power user!
Default or not, are there sensible alternatives on a Mac? I'm not sure if I'd consider OpenZFS on Mac "sensible" - but I haven't owned a Mac in decades, so... what are the alternatives to APFS?
Occasionally that is true of newbies; oftentimes it is the voice of experience and battlefield scars driving sensible decisions.
Maybe Apple doesn't want you to use external storage, because storage size is how Apple upsells devices and grabs a larger premium.
By using external storage, instead of paying $10k more for more storage, you are directly harming Apple’s margins and the CEO’s bonus which is not ok /s
Externally connected devices are not sexy, and Apple is concerned about image and looking sexy.
That is a weak excuse to rely on data integrity in the hardware. They most likely had that feature and removed it so they wouldn't be liable for a class action lawsuit when it turns out the NAND ages out due to a bug in the retention algorithm. NTFS is what, 35 years old at this point? Odd comparison.
The point is that NTFS makes zero effort to maintain file integrity at any level.
Handling file integrity at the hardware level is a big step up.
Believing that giant companies are monolithic “theys” leads to all sorts of fallacies.
Odds are very good that totally different people work on the architecture of AFS and SoC design.
Even still, those people report to people that report to people until you eventually get to the person in charge of the full product.
You can do this yourself in userspace if you really want it:
https://git.eeqj.de/sneak/attrsum
I use zfs where I can (it has content checksums) but it sucks bad on macOS, so I wrote attrsum. It keeps the file content checksum in an xattr (which APFS (and ext3/4) supports).
I use it to protect my photo library on a huge external SSD formatted with APFS (encrypted, natch) because I need to mount it on a mac laptop for Lightroom.
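The xattr approach is easy to sketch. Here's a minimal illustration of the same idea (not attrsum's actual format — the attribute name and hash choice are assumptions), using `os.setxattr`/`os.getxattr`, which Python exposes on Linux; APFS and ext3/4 support the same xattr concept:

```python
# Store a SHA-256 of a file's content in an extended attribute,
# then re-hash later to detect silent corruption or modification.
# Illustrative only; attrsum's real attribute name/format may differ.
import hashlib
import os

ATTR = b"user.sha256"  # hypothetical attribute name

def file_sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest().encode()

def seal(path):
    # Record the current content hash alongside the file.
    os.setxattr(path, ATTR, file_sha256(path))

def verify(path):
    # Returns True if content still matches the sealed hash,
    # False on mismatch, None if the file was never sealed.
    try:
        stored = os.getxattr(path, ATTR)
    except OSError:
        return None
    return stored == file_sha256(path)
```

One caveat with any xattr scheme: the attribute only survives on filesystems and copy tools that preserve xattrs, which is why it pairs well with APFS externals.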
A similar alternative is Howard Oakley’s Dintch/Fintch/cintch:
https://eclecticlight.co/dintch/
Worth noting, for ZFS - you can use the "copies" property of the dataset to save 2 or (usually) 3 separate copies of your data to the drive(s).
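A sketch of how that looks with the standard OpenZFS tools (pool/dataset name `tank/photos` is illustrative; note the property only applies to data written after it is set):

```shell
# Keep 2 redundant copies of every block in this dataset.
# Halves usable capacity for that data, but lets ZFS self-heal
# corruption of one copy even on a single-disk pool.
zfs set copies=2 tank/photos

# Verify the current setting:
zfs get copies tank/photos
```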
Note that this isn't too long after Apple abandoned efforts to bring ZFS into Mac OS X as a potential default filesystem. Patents were probably a good reason, given the Oracle buyout of Sun, but also a bit of "skating to where the puck will be" and realizing that the spinning rust ZFS was built for probably wasn't going to be in their computers for much longer.
> Patents were probably a good reason, given the Oracle buyout of Sun
There is no reason to speculate, as the reason is known (as stated by Jeff Bonwick, one of the co-inventors of ZFS):
>> Apple can currently just take the ZFS CDDL code and incorporate it (like they did with DTrace), but it may be that they wanted a "private license" from Sun (with appropriate technical support and indemnification), and the two entities couldn't come to mutually agreeable terms.
> I cannot disclose details, but that is the essence of it.
* https://archive.is/http://mail.opensolaris.org/pipermail/zfs...
* https://web.archive.org/web/*/http://mail.opensolaris.org/pi...
How is it known when the quote you gave says "may be" which implies it's not known and is speculation on their part as well?
The reply to the "it may be..." message is from a @sun.com email address, and confirms "that is the essence of it".
Fixed links for the message and reply:
https://web.archive.org/web/20091028/http://mail.opensolaris...
https://web.archive.org/web/20091028/http://mail.opensolaris...
"I cannot disclose details, but that is the essence of it." — Jeff Bonwick of Sun, co-inventor of ZFS
When Apple announced the creation of APFS they mentioned that their intent was to handle data integrity at the hardware level.
See:
> Apple File System uses checksums to ensure data integrity for metadata but not for the actual user data, relying instead on error-correcting code (ECC) mechanisms in the storage hardware.[18]
* https://en.wikipedia.org/wiki/Apple_File_System#Data_integri...
More evidence they thought HDDs were on their way out was the unibody MacBook keynote. They made a big deal about how the user could access their HDD via the latch on the bottom without any tools, as they said SSDs were on the horizon.
Not just durability. Performance too. Apple has a much better SSD controller that is vertically integrated into the stack.
Do Apple SSDs really have much better longevity and reliability? I've not looked at the specific patents, nor am I an expert on signal processing, but I've worked with SSD controller and NAND manufacturers in the past, and they had similar ideas of their own.
From my experience working on Mac laptops, yeah. SSD failures are incredibly rare but on the flip side when they do go out repairs are very costly.
I know from my previous job at a large hard drive manufacturer that we had special Apple drives running different parts and firmware than the regular PC drives. Their specs and tolerances were much different from the PC market as a whole.
Main reason was capturing 100% of storage upsell/upgrade money. They did same thing with RAM.
> Through all of these efforts, Anobit is promising significant improvements in NAND longevity and reliability.
Every flash controller does this. Modern NAND is just math on a stick. Lots and lots of math.
Presumably Apple want to be able to guarantee the quality of such logic.
Still sucks that you can’t use standard parts.
Contrary to popular belief, you can run many different off-the-shelf brand NVMe drives on all of the NVMe-fitted Intel Macs. All you need is a passive adapter. My 2017 MacBook Air has a 250GB WD Blue SN570 in it.
https://forums.macrumors.com/threads/upgrading-2013-2015-mac...
Not all Intel Macs. Only those without T2 chip, as T2 acts as the storage controller.
And all of the Apple Silicon Macs have them soldered on or use a proprietary module. They make way too much from storage upgrades to include an M.2 slot.
The on-board storage of their MacBooks is as much, if not more, driven by their design vision. They didn't make these things record thin to get an excuse to use surface-mounted RAM and storage. I don't agree with it, but I understand it.
Sure, and I agree with that goal. In fact I would like NVMe controllers to simply not exist. The operating system should manage raw flash, using host algorithms that I can study in the source code.
How do you think it would be electrically connected to the CPU?
Same thing with DDR5: the electrical layer is a beast, it's a reason enough to require its own controller.
> How do you think it would be electrically connected to the CPU?
On the CPU's PCIe bus. NVMe drives are PCIe devices, designed specifically to facilitate such interfacing.
Edit: Pardon, misread the actual statement you responded to. Of course one shouldn't hook NAND directly to the CPU. I'll leave my response for whatever value the info has.
I'm with you, but.... no. At the level where the controller is operating, things are no longer digital. Capacitance (as in farads, not bytes), voltage, crosstalk, debouncing, traces behaving like antennas, terminations, what have you. Analog values, temperature dependencies, RF interference. Stuff best dealt with by custom logic placed as close as possible to it.
The physical interface controller can exist to that extent, of course. But I think the command interface it should present to the host system should be a physical one, not a logical translation. The host should be totally aware of the layout of the flash devices, and should command the things that the devices are actually capable of doing: erase this, write that, read this.
We already see the demand for this in the latest NVMe protocol spec that allows the host to give placement hints. But this is a half-measure that suggests what systems really want, which is not to vaguely influence the device but instead to tell it exactly what to do.
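A toy model makes the erase/program asymmetry concrete. This is purely illustrative (not any real NVMe or ONFI command set): pages are programmed individually, but erase is block-granular, and a programmed page can't be rewritten until its whole block is erased — exactly the constraint an FTL hides and a host-managed interface would expose:

```python
# Toy model of raw-NAND constraints a host-managed interface would
# expose to the OS. Purely illustrative; block/page sizes are made up.
PAGES_PER_BLOCK = 4

class NandBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK
        self.programmed = [False] * PAGES_PER_BLOCK

    def program(self, page, data):
        # Program-once: rewriting a page requires erasing the block.
        if self.programmed[page]:
            raise ValueError("page already programmed; erase block first")
        self.pages[page] = data
        self.programmed[page] = True

    def read(self, page):
        return self.pages[page]

    def erase(self):
        # Erase is block-granular: all pages reset at once.
        self.pages = [None] * PAGES_PER_BLOCK
        self.programmed = [False] * PAGES_PER_BLOCK
```

An FTL spends enormous effort (garbage collection, wear leveling, mapping tables) papering over exactly this mismatch; the argument above is that a host filesystem, which already knows data lifetimes, could do it better.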
Interesting… I see no reason to be downvoted.
Non upgradeable storage and ram is ridiculous.
Interestingly, when the M4 Mac mini went on sale, the version with 32GB RAM / 1TB drive was priced at exactly 2x the 16GB RAM / 512GB version. This kinda implies that Apple sells only RAM and storage, and gives away the rest for free.
It makes no sense for nonvolatile storage, where the power consumption, bandwidth limit, and latency of socketed interconnects are trivial, compared to the speed, latency, and power consumption of the drive itself.
For RAM, it's an entirely different ball game. The closer you can have it to the processor die, the higher the bandwidth, the lower the latency, and the lower the power consumption.
On the one hand, I get you. On the other, I’m just not sure we live in that world anymore for most people.
My daily driver is a base config 16” M1 MacBook Pro from 2021 and I have no inclinations to upgrade at all. Even the battery is still good.
I run CAD, compile and run large C++ projects. Do tons of heavy stuff in matlab. Various visualizations of simulations. My laptop just isn’t the slow thing anymore. I’m sure workloads exists that would push this machine but how many people are actually doing that. (Ok fine, chrome exists)
Even the smaller SSD isn’t an issue for me in practice because iCloud Drive and Box automatically move things I don’t often access off disk freeing local space.
Frankly if smaller memory footprints and smaller SSDs translates to lower base config prices and longer battery life it was the right choice, for me anyway.
There is someone on YT (Doctor Feng, or similar, though I can't find) who literally will have people ship him entry level iPhones/iPads/MBPs, etc, and he'll upgrade them to 4 and 8 TB SSDs. And create ASMR videos of the process.
Even with upgradable memory:
When I bought my "cheesegrater" Mac Pro, I wanted 8TB of SSD.
Except Apple wanted $3,000 for 7TB of SSD (considering the sticker price came with a baseline of 1TB).
I bought a 4xM.2 card and 4x2TB Samsung Pro SSDs, which cost me $1,300. I kept the 1TB "system" SSD for the OS, though the new array was actually faster, at 6.8 GB/s versus the system drive's 5.5 GB/s.
Similar to memory. OWC literally sells the same memory as Apple (same manufacturer, same specifications). Apple also wanted $3,000 for 160GB of memory (going from 32 to 192). I paid $1,000.
> Doctor Feng, or similar, though I can't find
DirectorFeng: https://www.youtube.com/channel/UCbzzMQ1mNKjAaDwbELsVYcQ
I bought one during their preorder period. The first SSD started to fail due to overheating. I just received and installed the replacement this week. Fingers crossed that it will be okay.
Important note: the seller provides no warranty for the SSDs. I was fortunate that they offered a 1-year warranty when I bought mine, but that is no longer the case now. $700 is a pretty big risk when there's no warranty.
FWIW, the non-Pro-compatible SSDs were overpriced initially as well, but they came down in price as they became more prevalent. Wait a few months, and we'll probably see the same with Pro-compatible SSDs.
$700 for 4TB! Getting robbed in broad daylight and writing a happy blog post about it.
> I was provided the $699 M4 Pro 4TB SSD upgrade by M4-SSD. It's quite expensive (especially compared to normal 4TB NVMe SSDs, which range from $200-400)...
Yes, that kind of culture is why while I appreciate many of Apple's technologies, I rather let customers or employers provide hardware if they feel so inclined for me to use Apple.
Privately it is all about Linux/Windows/Android.
Very good insights,
https://en.m.wikipedia.org/wiki/The_Cult_of_Mac
Getting robbed less is better than getting robbed more.
True. But I don't know if I’d be gleeful if the robber left me the credit cards and took the cash.
Looks like you also have to do the upgrade yourself (so it’s not all just cash money being forked over).
Also surely voids AppleCare, if you have it.
[flagged]
Oh brother
Custom low volume hardware is going to be more expensive.
Behold the power of the Reality Distortion Field. Apple marketing is insane.
Ones you can replace the storage with a screwdriver are a safe risk, the soldering upgrades are a void warranty and something you are always best getting somebody who can do it in their sleep to do for you. Soldering is not as easy as it used to be, even the pros will not have 100% success and need to reflow or rework.
I'm not usually one to respond with snark, but I'm sorry, your comment reads like what I imagine a stroke feels like. Can you please rephrase?
I did the same a few weeks ago (https://blog.notmyhostna.me/posts/ssd-upgrade-for-mac-mini-m...), usually I’m far into the “just pay for convenience” camp but this was so easy to do and the price difference was just too big.
Saving $500 for 30 min of actual work that’s also easily reversible if needed for a support case is too good to ignore.
Or just go with a Beelink SER8 and toss Arch Linux + Wayland on it. You'll save thousands of dollars and have a better user interface that's way more customizable and efficient.
Mother's Mac mini 2014? Slow as a dog, 30-second pauses, became unusable. Extremely tricky to reach the 5400rpm hard disk. Found a third-party adapter that could bodge an NVMe under the easily removable base flap. Suddenly transformed it into a fast, nippy, usable machine. (She paid up for the 8GB RAM originally.) But still rather annoyed that Apple essentially crippled their own product and it could only be fixed by chance. Wasn't a cheap PC...
Honestly the external option seems a lot better value for the money for almost all use cases. Something like half the cost. No tinkering with the internals of the very expensive thing. You can move it between computers, upgrade the stick in it, etc.
I'm sure there are cases where you really do care about speeds >3GB/s (and USB-4, the port on the mac, should max out at ~5 which is still marginally lower than the internal one). But I doubt they are common. It's hard to process most data in a meaningful way that fast.
Yep, you can pick up a 4TB USB-C SSD for a few hundred dollars/euros. Unless you are moving around 10s/100s of GB routinely, it's not going to be horribly slow. I had a 2TB Samsung taped to my iMac for a few years when its Fusion Drive failed. The 64GB SSD still worked, so I used that for the OS and the Samsung for the rest. I had Steam installed, and X-Plane, and a few other things. Worked great. This was the 2015 5K iMac. Eventually the motherboard died. I still have the SSD.
I've considered getting a Mac mini with a decently specced CPU/GPU and plenty of RAM, then just attaching a big SSD via Thunderbolt. Probably a lot cheaper than maxing out the internal SSD, and I don't think it will be that horrible. My main use case would be dealing with photos, maybe X-Plane, and some videos. I might buy some games as well, but it's not my core use case. It seems the Apple store is slowly filling up with a decent selection of ported games.

I gladly pay the Apple tax to never deal with Windows again. I actually have a Linux laptop running Steam. The hardware is just really crap and I keep longing for my MacBook whenever I have to use it. Actually typing on this thing right now as I'm traveling; I left my work M4 Max MacBook at home (it's a bit of a beast to lug around on vacation).

The mini would probably be hooked up to a TV so I can watch stuff via Firefox with a sane ad blocker and UI, rather than dealing with whatever craptastic shit comes with modern smart TVs.
So a reasonably beefy Mac mini would basically be my entertainment center and double as a home PC with a ginormous 4K screen. I have considered getting some AMD equivalent with Arch Linux. Still on the fence about that. But either way, external USB-C for storage seems fine.
My only nit with that is that with external storage there’s definitely a race when it comes to mounting.
More than once I’ve had, say, Photos complain that it couldn’t find its library because I have apps relaunch on startup, my library has been moved to external storage, and the drive was not ready yet.
Also there's no guarantee, at least naively, that what was /dev/disk4 on the last boot will be /dev/disk4 on this boot. Normally not necessarily an issue, but if you care about actual drive devices vs volume names, it can be one. (And there may well be some low-level config file wizardry to fix that issue; I just haven't bothered to research it.)
on the boot stuff, can you not write the boot sequence (unsure of the exact terminology here) like you do on Linux? ie just get the hardware id of the device, and set which device that will correspond to (like /sd1 or whatever)?
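macOS does have a mechanism along those lines: `/etc/fstab` (edited with `vifs`) accepts volume UUIDs, which stay stable across boots even when the `/dev/diskN` number changes. A sketch, with an illustrative volume name and UUID:

```shell
# Find the stable Volume UUID (not the /dev/diskN node):
diskutil info /Volumes/Media | grep "Volume UUID"

# Then add a line like this via `sudo vifs` (UUID is made up here):
# UUID=11111111-2222-3333-4444-555555555555 none apfs rw,auto
```

This pins the mount to the volume's identity rather than its enumeration order, though it doesn't by itself fix the "app launched before the drive mounted" race.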
There are also shops in China that will upgrade the SSD in a mac mini for cheaper and they will do all the work of the DFU restore etc.
But Jeff doesn't live in China.
Could such a place exist in the US or would Apple shut it down?
I think https://dosdude1.com/ provides such services? And his YouTube channel is a great place to start if you’re looking to DIY this: https://www.youtube.com/user/dosdude1
I think it could since a lot of shade tree iphone repair shops exist. Probably not enough demand to pay for the overhead unlike in china though.
And when the machine arrives back in the States, it even has a fresh CPC ROM soldered onto the back of the SOC!
I'm not a security researcher, but I get the distinct impression that Apple's hardware security is good enough that if you actually had an evil-maid attack on the M4 Pro Mac mini, it would instantly become the hottest news in the security community.
I would not be so sure that Apple's hardware security is good enough, taking into account that for several years it was possible to take complete control of any iPhone, remotely and undetectably, because of a combination of hardware and software bugs.
The Apple Mx CPUs had some secret test registers that allowed the bypassing of all hardware memory protections and which could be accessed by those who were aware of their existence, because they were not disabled after production, as they should have been. Combined with some software bugs in some Apple system libraries, this allowed an attacker to obtain privileged execution rights by sending an invisible message to the iPhone.
It is unknown whether the same secret test registers were also open in the laptop versions of the Apple Mx CPUs. There the invisible message attack route would have been unavailable, but malicious Web pages might have been able to use the same exploit.
This incredible security failure was hot news for a couple of weeks, together with the long list of CVEs associated with it, and it was also discussed on HN, but after that it was quickly forgotten. Now most people still think that Apple devices have good security, despite a history showing otherwise. I do not think that any other hardware vendor has been caught with a security bug as dumb as those unprotected hardware test registers.
This was not a theoretical security failure, but it was discovered because some unknown attackers had used it for a long time to spy on some iPhone owners. The attack had been discovered by studying the logs of WiFi access points, which had shown an unusually high outbound traffic coming from the iPhones, which were exfiltrating the acquired data.
It wasn't known by many and was probably too valuable to burn, so targets would be selective; when it was found, it was patched on virtually every iDevice.
You make it sound like this was a huge impact issue, it really wasn’t, theoretically everyone could be affected but in reality a negligible subset were.
I agree that few have been affected, but this has demonstrated gross incompetence of Apple, unless it had been an intentional backdoor, which would be even worse.
The fact that Apple keeps secret many technical details of their CPUs, like the existence of those hardware test registers, does not improve the security of their devices, but it weakens the security considerably.
Because of the Apple secrecy policy, the existence of the backdoor has been known and exploited by very few, but the same secrecy has enabled those few to spy on any interesting target for several years, without being discovered.
Had the test registers been documented, someone would have noticed quickly that they are accessible when they should not be, and the vulnerabilities would have been patched by Apple a few years earlier.
Hard disagree with the sentiment; the crown jewel of their hardware is the Secure Enclave and its software SEPOS, which has benefitted significantly from having its secrecy maintained. For decades now it has remained very much a black box that nobody has successfully attacked; and if it were attacked, we would definitely know, as the applications of subverting it would be obvious.
As for the registers itself, I concede that information about those specifically could've been made available.
It is mind-bogglingly simple to override Apple MDM and device enrollment for MBPs. In a manner that is one-and-done, survives upgrades, etc.
Two minutes or less, 4 DNS entries.
I'm a lot less convinced than you are of the hardiness of Apple's security.
I wrote "Apple's hardware security".
To be fair, the parent comment did qualify their uncertainty four whole times:
1) "I'm not a security researcher" (ethos; appeal to authority)
2) "I get" (pathos; personal opinion)
3) "distinct impression" (pathos; emotional appeal)
4) "good enough" (logos; implies security is immeasurable/infeasible to prove)
Now, I wouldn't get caught dead endorsing a company that I have to write so many excuses for. But they did warn you!
Umm… what’s an evil-maid attack? Sounds like a b-horror film. :)
An "evil-maid attack" is the name used for the case when the attacker has unrestricted direct physical access for a short time to the computer, like it may be the case for the personnel who cleans the office or home in the absence of the computer users (or as it may be the case at some border control points if a laptop/smartphone is taken by the authorities for a checking done in another room, where the owner is not present).
With direct physical access, a lot of things can be done which cannot be done remotely, e.g. attempting to boot from an external device, possibly using hardware fault injection to bypass protections against that, attempting to read data that has not completely decayed from the DRAM modules, replacing some hardware component or inserting an extra component that would enable spying in the future, making copies of an encrypted SSD/HDD in the hope that comparing them with further copies made in the future will enable breaking the encryption (if it uses an encryption mode that does not protect against this kind of attack), and so on.
Do you actually believe this?
Not at all.
Don't be rude, your NSA ROM gets lonely sometimes.
I feel that you're being downvoted by people who don't know history.
I'll add some references:
https://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa...
https://en.wikipedia.org/wiki/2010s_global_surveillance_disc...
Same with the guy half-joking about Chinese malicious implants. Because of course China would never engage in espionage. It's interesting to note that he/she got downvoted a lot harder.
They know, they just wish it weren't true.
"You only know me as you see me, not as I actually am"
CPC?
Possibly ParaDOS?
Ha! I’m sure I asked you about this before but I think you hinted at one point about being able to supply PD on a format other than disc and I think you said not on cassette. What was that?
Communist Party of China, aka CCP.
I was quite pleased with the iBoff 2TB SSD I got for my M4 Mini. It's sad how badly Apple has some of us conditioned with the pathetic amounts of storage they include. I haven't had a Mac with more than 512GB of storage, basically, ever? And recently I was on my Mini, digging through some old backups, and hesitated as I normally would downloading a 40GB zip from my NAS, because "oh geeze this is 40GB plus another 40 after decompression, do I have enough space?" because 80GB is normally 15% of my Mac's storage space. Then I remembered, oh yeah, heaps of storage, this'll only cost me 4% of the total. I bought this Mac with the 256GB base SSD knowing I could upgrade, and nearly 40% of the drive was taken up out of the box.
It's pure robbery on Apple's part. Completely beyond the pale now. Their ridiculous RAM and storage prices were never that big of a deal back in the PowerBook/early Macbook Pro days, because you could always opt out if you were a tiny bit handy with a small screwdriver (my 2008 unibody lets me swap storage with *1* screw, swap a battery with zero!). Now? It's unforgivable. I don't care about soldered RAM, I get it, but it is despicable charging as much as the entire computer to upgrade the RAM a paltry 16GB.
There's profit, and there's actively making your entire product experience worse in pursuit of profit. Having to constantly hem and haw over oh god oh geeze do I have enough local storage for this basic task, having to juggle external storage and copying files back and forth (since plenty of their own shit doesn't work if its installed on an external SSD), or constantly deleting and redownloading larger apps, makes the product experience worse. Full stop. At the very least every Mac they sell should have 512GB, if not a TB, stock. I'm tired of acting like SSDs are some insanely expensive luxury like it's 2008 again.
The RAM has always been the biggest issue, for me. I'd almost always prefer to have my larger data on an external system. In my case an NAS or several RAID enclosures. Having the data "mobility" is important. My normal workflow is to have my active work on the system in question and then move it back forth as I finish or swap projects. In recent years, I have never maxed out my storage on my Macs. To be fair, I don't work with a bunch of 4K video editing, or other huge datasets, so maybe that's where it becomes more of a problem.
Man, perspective here is quite funny to me. I just wrote a diatribe about SSD speeds vs my HDD experience in life. At $699 to have 5+GB/s throughput, a younger me would look at you like you had two heads and just walked out of a UFO. There's no way it could be that fast/small/cheap in any future without alien tech. I get that Apple's pricing is higher than other options. Even still, it's dirt cheap for the performance it delivers, bringing what used to be high-end throughput to consumers.
Even still, I'm a huge fan of taking advantage of the cheaper options with a portable external chassis and a nice Thunderbolt cable. While not quite as fast as the internal version, it's still 2+GB/s worth of speed, which exceeds my needs/use.
So from my perspective, it's dirt cheap compared to your insanely expensive perspective
>taking advantage of the cheaper options with an portable external chassis and a nice thunderbolt cable.
This has a number of downsides on macOS. I am well aware of the cheapness of this, but you also get a worse user experience. I have a huge NAS that I could connect to over 10GbE too, save for no native iSCSI drivers. I have a handful of external SSDs in enclosures, but I can't easily boot off of them (and if I do, certain features of the OS get disabled). I can't easily or reliably move my home folder to one. I can't clean up my desk without buying expensive external "docks" or something that, in addition to a standard M.2 SSD, comes out more expensive than the iBoff upgrade. I have to waste my time juggling files back and forth from the external to the internal in situations where I either want to (for faster speeds) or need to (in cases where Apple's software refuses to work if it's not on the internal SSD).
Yeah, 20 years ago the thought of 5GB/s for less than a grand was fantasy. It's not fantasy anymore, and it's not 20 years ago. I'm tired of pretending it is to justify these outrageous prices Apple is extracting from their customers.
There maybe some Stockholm Syndrome, but to be clear, I'd be much happier with cheaper anything too.
You're also acting like I'm suggesting running the OS from the external. That's just a weird way to think about it. The system drive is just that, for the system and apps and home folder. Media belongs on a different volume. Granted, I'm a media person with professional workflow mentality where the media is never small enough to fit on a system drive. Plus, "back in the day" the media drives were much faster than the system drive. So it's all turned up on its head in that regard
Can't you just install macOS on your own hardware, or are they typically Apple in that department as well?
Are you familiar with Hackintosh? That's what people did with Intel based platforms. Apple Silicon put an end to the Hackintosh.
Source? Last year I installed macOS 14 on a Thinkpad X230
Sure, if you want to linger on old versions of the OS, but once Apple quits supporting Intel it will be over.
So maybe I'm calling it early, but it will at some point be pointless to continue running the old Intel systems.
Can’t emulate or spoof m series chip?
Not at any usable level of performance. It's a completely different hardware architecture.
with what? the m series is everything on the chip. you're suggesting an Intel CPU an Nvidia GPU and a bunch of RAM sticks to be emulated to present itself to the OS as a single device?
Not sure. I assumed there could be some way to lie to the os like how you can spoof other metadata about the device to other software.
Hackintosh still exists. macOS 16 will be officially the last x86 supporting release.
But I think that's the point: the performance of a Hackintosh is terrible for many reasons, as it's all a hackjob.
Performance was very, very good in my experience. Benchmarks normally took a 10% hit vs their equivalents on Windows, but being able to run macOS on arbitrary consumer hardware made performance incredibly cheap. My first proper bang-for-buck machine was an i7-4790k with an R9 270x GPU, 16GB of RAM, and a combination of SSD and HDD storage. Total cost was around $1300 CAD if I remember correctly, which is absurdly cheap compared to what you'd have to pay at the time for a Mac with that performance. I also ran macOS on a 2x E5-2670 machine with 64GB of RAM, as well as a 2x E5-2697 v2 machine, and an i9-12900k machine with an RX 6950XT GPU, all of which were incredible value compared to an off-the-shelf Mac. It's only recently that Macs are catching up to Hackintoshes performance-to-dollar wise, because Apple Silicon is very, very good. Once I get my WRX90 workstation hackintoshed it should give the Mac Studio and Mac Pro a run for their money, but not for much longer if Apple drops support for x86 after macOS 16.
The Hackintoshes I've built were much better performance for price compared to equivalent official model. It just took a lot longer to get them up an running. We were building for production machines vs personal use, so things like Messages, AppStore, etc that could be tricky to get to work were just not something we cared about.
I ran Hackintoshes for many years. Performance on a $1500-2000 Intel platform was always extremely good (certainly better than any Mac I was willing to shell out for and sometimes better than any Mac that was sold).
That time period of the trash can Mac saw a lot of people looking to have a useful computer, and Hackintosh was the only way. We had systems with multiple GPUs that blew the doors off the trash can's years-old AMD GPUs. Then, when new GPUs came out, Hackintoshes just upgraded while the trash can sat there, all sad in how useless it was.
The people involved in making the Hackintosh possible should be immortalized in stone carvings to be remembered for all of time.
> It's pure robbery on Apple's part. Completely beyond the pale now. Their ridiculous RAM and storage prices were never that big of a deal back in the PowerBook/early Macbook Pro days, because you could always opt out if you were a tiny bit handy with a small screwdriver (my 2008 unibody lets me swap storage with 1 screw, swap a battery with zero!). Now? It's unforgivable. I don't care about soldered RAM, I get it, but it is despicable charging as much as the entire computer to upgrade the RAM by a paltry 16GB.
For what it's worth, I completely agree with you.
But.
I suspect that Apple isn't solely doing this for profit. Apple's pricing structure aggressively funnels people into the base config for each CPU.
Thinking about getting an M4 with upgraded ram? A base config M4 pro starts to look pretty good.
In practice, this means that Apple's logistics is dramatically simplified since 95% of people are ordering a small number of SKUs.
> There's profit, and there's actively making your entire product experience worse in pursuit of profit.
It was really egregious when the base config only came with 8 GB of ram. I'll admit that storage can be a bit tight depending on what you're trying to do, but at least external storage is an option, however ugly and/or inconvenient it may be for some.
Don't want to deal with the logistics of lots of SKUs? Don't sell them. Trying to upsell people is a money move. Selling a SKU where the 80+gb OS is like 40% of the disk is a good SKU to cut. Especially if some consumers are unlikely to realize how little space they will actually have.
> Don't want to deal with the logistics of lots of SKUs? Don't sell them. Trying to upsell people is a money move. Selling a SKU where the 80+gb OS is like 40% of the disk is a good SKU to cut.
This isn't a profitable move from Apple's perspective - they try to keep the base unit at about the same price across generations. That's what happened when they moved from 8 GB of ram to 16 GB.
The idea is to then also funnel them into icloud drive plans
Tangential, just based on a funny coincidence noticeable in the article: What do all these M’s stand for, anyway? I guess the M.2 might be inherited from the m in mSATA and mPCIe(?).
For Apple… they had A for their cellphone chips, which vaguely made sense because they were the only chips Apple made at the time. But then, M for their laptop chips? M as in… mobile, or mini? But they use it in their Macs Pro, including their workstation-y ones…
M as in Mac
This isn't particularly true anymore since some iPad models are also shipping with M processors.
Oh. I’m an M as in dummy, haha.
I made a 16TB external SSD RAID0 using four 4TB SATA drives and a TerraMaster external enclosure... for less than $700 [1]
Not as fast as a single NVMe in an external Acasis enclosure... but it is fast.
[1] https://www.amazon.com/TERRAMASTER-D4-320-External-Drive-Enc... [sold out]
Not worth the hassle and the faffing. Just pay Apple their tax. Your time is far more valuable. And if it’s not then you have bigger fish to fry.
> Your time is far more valuable.
Damn I wis--
> And if it’s not then you have bigger fish to fry.
You make it sound like anyone in tech that isn't making giant piles of money screwed up their career.
And if I take that literally, wouldn't I have to be making at least a thousand dollars an hour?
This should even hold for non-US salaries. This is a machine that enables you to work for about four years. What's that, 200k€ at /median/ EU wages? Penny pinching. The thing is that consumers and prosumers vary, and everybody wants to drive a Porsche to work and to leisure.
Not blaming anyone for wanting a machine like this. Trying to point out that tech has become so accessible that we all aspire to have a supercomputer as our daily driver.
When I was young, a PC (XT and on) would set my dad back about a month's wages. What I see is a huge compression of the price range. But the upper part of the range still exists (training LLMs is not much different from the central computer at universities in the '70s/'80s).
I think an EU salary of 200k/year is at least uncommon if not outright rare and definitely not median. At least in the tech space, maybe in finance it's more common.
GP suggested 200k over 4 years, which is pretty reasonable.
That makes far more sense in that case. Though I would have to clarify that 200k/year is attainable in terms of total compensation, just not salary per year on average.
> This is a machine that enables you to work for about four years. [...] Penny pinching.
But that wasn't the argument. In "your time is more valuable", the time is what it takes to remove a dozen screws, replace a card, and format it. Plus any increased risk of data loss, but that should also be quite small if it exists. So for saving hundreds of dollars or more, your expected time is like an hour if you have backups (you'd better have backups!), hard to say for sure if you don't.
Dunno... $500 for 30 minutes of fun work?
To be fair, I did this upgrade and actually ended up wasting several hours because the first SSD failed after a few weeks.
Right...so $500 + the risk of having to spend hours to deal with problems rather than taking it to an Apple store and having them deal with it (assuming you live close enough to one).
Obviously, the tradeoffs are different for everyone.
That is a sensible attitude, but some of us welcome an excuse to get out the box of tools and take something physical apart.
I better get the skillet out then.
Although yes I didn't buy a Mac because of this.
Or just avoid Apple products to save money and time
Apple won't upgrade the storage for you aftermarket, as far as I'm aware. There's no tax you can pay them to take your current machine and bump the spec.
Frankly, this is exactly the sort of head-up-ass attitude that will end with Apple being smacked around by investigatory commissions like what happened to John Deere and Microsoft.
> Apple won't upgrade the storage for you aftermarket
Not only that, they won't repair devices with third-party hardware. If my Mini has an issue, I'll have to remove the new SSD and reinstall the OEM one before I drop it off. I experienced this when I tried to get my 2012 MacBook Pro fixed (wet keyboard).
They did the replacement, but I learned how to do it myself, including replacing the keyboard again, another SSD upgrade, and eventually a battery upgrade.
I'd rather start with replaceable batteries in smartphones.
You can replace the battery on an iPhone yourself these days. Apple's terrible design makes the process involve shipping specialized hardware to your home, for which you need to hand over a good chunk of change in collateral to be able to use, but it can be done.
My suspicion for their shitty process is that it was set up purely so Apple can tell regulators "see, consumers can't be trusted to replace their own batteries, look what it takes", but they do offer a programme for it.
The stupidest part about the whole thing is that the official URL looks like a total scam: https://selfservicerepair.com/en-US/home
It was also more expensive than Apple’s authorized repair last time I checked.
I was provided the $699 M4 Pro 4TB SSD upgrade by M4-SSD. It's quite expensive (especially compared to normal 4TB NVMe SSDs, which range from $200-400)
Depends what type of flash that's comparing. QLC is cheap, TLC a bit more expensive, MLC nearly unobtainable, and SLC insanely expensive unless you SLC-mod a QLC drive.
While you're at it add the USBC power hack https://github.com/vk2diy/hackbook-m4-mini
I've been traveling for business with this as my sole machine for 3 months straight and it has proven to be an excellent system.
> Fix the cables in place. This can be very fiddly. It helps greatly to have a fine pointed set of tweezers to assist with placement, bending and the application of pressure whilst screw-down is underway. Take your time and try to get all the cable core under the screw or at least a fair amount.
If you do this mod, you should really use crimped ring connectors instead of just hooking the power cables around the screws. It greatly reduces the risk of pull-out since the screw retains the connector, which also means less chance of shorts and a much easier install. Also since the terminals are uniform and flat, you get much more even clamping. I would also add heat shrink over the crimp.
I don't have a Mini so can't comment on the right size to buy, but you can buy ring terminals in practically any diameter for next to nothing:
https://www.digikey.com/en/products/filter/terminals/ring-co...
I wonder why Apple left those two large power pads? They don't look like typical test points.
Are they populated from the existing PSU input, or just there in case anyone wants to mod it?
They probably use them in production as test jig connects for passing power. They are vertical inter-board rails. When making physical connections for high current contacts it pays to have a larger surface area in case there is a poor connection as substantial draw may occur for short periods. Also, such surfaces may degrade over time, so extra surface area is desirable.
What is the benefit over a macbook in this case?
The linked repo has a pretty good rundown of possible reasons:
> If non-square screens on Macbook Pros make your blood boil with rage
> If you can't afford or don't want to pay for a Macbook Pro (smart choice)
> If you have ergonomics concerns with shrinking laptops and one size fits all keyboards
> If you like your systems to be repairable and modular rather than comprised of proprietary parts shoehorned in to a closed source design available only from a single vendor for a limited time
> If you are blind (and don't want to carry a screen around)
> If you want to use AR instead of a screen and therefore prefer to be untethered
> If you are on a sailing ship, submarine, mobile home, campervan, paraglider, recumbent touring bicycle, or otherwise off-grid
> If you want a capable unix system to power a mobile mechatronic system
I'd add in not having to deal with a Macbook in clamshell mode doing stupid crap like forcing you to double-tap the touchID button sometimes, refusing to connect to external keyboards and mice on wake, and some of the other annoyances I have dealt with.
Also, a Mac Mini is small, and a MacBook is not, at least as a function of "desk area" vs "area consumed".
> If non-square screens on Macbook Pros make your blood boil with rage
> If you are blind (and don't want to carry a screen around)
> If you want to use AR instead of a screen and therefore prefer to be untethered
You can just remove the screen! My M1 Air works just fine at least. (I’ve broken the screen, but if you just don’t need screen at all, you can sell the top half assembly and save some money.)
Well, some of those are reasonable. It's pretty hard to look at a 16 inch model and complain about "shrinking laptops".
Mini is $600
MB Air ($1100+ / >1.8x) is only available up to 15", which is IMHO too small for long-term comfortable use.
MB Pro ($2300+ / >3.8x) is 16", which is IMHO still a bad ergonomic experience. I'd sooner buy a Mini, trash it, and buy another one four times over. Especially given they are improving annually.
Price was a different bullet point.
1. cheaper 2. different form factor 3. more choice of battery/kb/mouse/screen/camera 4. not landfill when you have to replace battery/kb/mouse/screen/camera 5. doesn't have an annoying chunk out of the screen 6. doesn't have a video camera pointed at you all the time 7. keyboard that suits large hands 8. keyboard in preferred layout 9. not subject to apple tax on most components/upgrades
Weird product.
For the desktops you can always just plug in an external drive.
That said, SSDs eventually have to go bad.
This is probably more important as an RTR (right to repair) issue.
Are there any suggestions on upgrading the storage of an M1 MacBook Pro? Even with the 1TB version, I'm feeling the pinch.
If you're okay to void warranty there are repair shops that will solder larger NANDs for you. It's not cheap though.
Unfortunately it doesn't have a socketed SSD but one that is soldered on, so while a technician can do it, it's an extremely involved job.
Rats, that's what I thought. Ah, well.
I’m surprised and happy that this is possible.
> It's still more expensive than a normal nvme, but not by too much.
It's double the price, double is too much.
$1200 for a 4TB upgrade is so ridiculous. Manufacturers holding RAM for ransom is very annoying, especially when the lowest config isn't even meant to be purchased and the specs are so low it will underperform or be obsolete in a few years.
This is kind of why people started cloning Macs in the '90s. They were too expensive straight from the factory.
The people behind the "kingsener" YT channel have been doing these types of upgrades for a long time.
He recently posted an upgrade of this same process as a short - https://m.youtube.com/shorts/b-Z5GhYhbjM
It’s wild to see how much Apple invests in making these machines hostile to user upgrades. But it's also cool to see people out there with the skills to desolder the chips, memory, and storage and replace them with a much faster alternative.
If Apple truly cared about their carbon footprint, devices would be easily serviceable and upgradeable by the user.
Apple solves carbon footprint by making devices that you will want to use for at least 10 years.
Shame that they don't support the device for 10 years too.
You mean the company that had several generations of those terrible laptop designs that made you rip out the whole chassis when your keyboard became unusable after dust got into the keys?
New in box after having been stored in a warehouse for twenty years maybe. Apple isn't any greener than any of their competitors.
Being relatively greener than a trash-tier Dell laptop doesn't make you a green supplier in absolute terms...
Comparing the speeds of a new flash device and an old, used one will typically not be valid unless steps are taken to condition the new device into a steady operating state.
What might those steps be?
Putting the new one through an amount of use equal to what the old one saw, because SSD controller firmware is unpredictable and many SSDs see reduced performance over time.
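In benchmarking practice this is usually called "preconditioning": writing the drive's full capacity a couple of times so the controller's flash translation layer reaches a steady state before you measure (SNIA's Solid State Storage Performance Test Specification formalizes the idea). A minimal sketch as a fio job file, assuming the target is /dev/nvme0n1 (note this destroys all data on that device):

```ini
; precondition.fio -- hypothetical job file; DESTROYS all data on the target
[precondition]
; assumption: the drive under test, addressed as a raw block device
filename=/dev/nvme0n1
; sequential fill of the whole LBA range
rw=write
bs=128k
; bypass the page cache so writes actually hit the drive
direct=1
ioengine=libaio
iodepth=32
; two full passes is a common steady-state heuristic
loops=2
```

Run with `fio precondition.fio`. For random-write comparisons (the hardest case for the controller), a follow-up pass of 4k random writes before measuring is commonly recommended.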
nicely done
[dead]
So you pay $700 for an SSD that otherwise retails for $200 and then do an "unauthorized" modification of your own computer and void the warranty to install it, but that's still preferable because it otherwise costs $1200 directly from Apple. The Apple tax is really something else.
> an "unauthorized" modification of your own computer and void the warranty to install it
Citation needed. This modification doesn't look to me at all like it'd void the warranty unless you damage the machine while you do the installation.
If you need to make a warranty claim, you should of course reinstall the factory card first, since the vendor doesn't expect users to replace that part and won't have any process for spotting a non-Apple card, removing it, and returning it to you when you take your machine in for service.
But voiding your warranty for this has been roundly rejected, in the US at least, as long as you don't damage your equipment by doing it.
Apple even provides a support page with details on the process, for example: https://support.apple.com/en-us/121508
I completely quit buying Apple devices altogether, but I still occasionally check their website. The SSD upgrade prices are ridiculous and funny, especially since I keep meeting people who are convinced that Apple's SSDs are somehow magically better than my 60 EUR Samsung M.2 and the price is hence justified.
The upgrade prices:
- 13" MacBook Air: 256GB to 512GB -> 256GB for 250 EUR
- 14" MacBook Pro: 512GB to 1TB -> 512GB for 250 EUR
So the Air upgrade is twice the price for what is - as far as I was able to figure out - the same hardware?
Double check the SoCs. The base model MacBook Air has a slower GPU than the base model MacBook Pro and the upgraded MacBook Air. When shopping for Apple products, you need to compare every number on the spec page individually because Apple is scared of model numbers.
For a while, some MacBooks also had slower disks because some capacities used one NAND chip while others used two. I believe they stopped doing that for their latest models, though. That kind of fuckery means you need to look up benchmarks for each individual model, because the performance differences aren't clear from the product description.