Not too surprising given what I've seen of their vendor SDK driver source code compared to mt76. (Messy would be a kind assessment.)
Unfortunately, there are also some aftermarket firmware builds running the vendor driver, since it has an edge in throughput over mt76.
Luckily, MediaTek and their WiSoC division have a few engineers who are enthusiastic about engaging with the FOSS community, while also maintaining their own little OpenWrt fork running mt76.[1]
[1] https://git01.mediatek.com/plugins/gitiles/openwrt/feeds/mtk...
Why is it that so much of this hardware/firmware feels like deploying a PoC to production? Why can't they hire someone who actually knows what they're doing?
The consumer space is brutally competitive - you're working on tight margins and designs become obsolete very quickly. MediaTek's business is built on selling chips with the latest features at the lowest possible price. Everything has to be done at a breakneck pace that is dictated by the silicon. You start writing firmware as soon as the hardware design is finalised; it needs to be ready as soon as the chips are ready to ship. These conditions are not at all suited to good software engineering.
In an ideal world, consumers would be happy to pay a premium for a device that's a generation behind in terms of features but has really good firmware. In the real world, only Apple have the kind of brand and market power to even attempt that.
> you're working on tight margins and designs become obsolete very quickly.
This seems like the exact place where open source is a competitive advantage.
Step 1, open source your existing firmware for the previous generation hardware. The people who have the hardware now fix problems you didn't have the resources to fix.
Step 2, fork the public firmware for the previous generation hardware when developing the next generation. It has those bug fixes in it and 90% of the code is going to be the same anyway. Publish the new source code on the day the hardware ships in volume but not before. By then it doesn't matter if competitors can see it because "designs become obsolete very quickly" and it's too late for them to use it for their hardware/firmware in this generation. They don't get to see your next generation code until that generation is already shipping. Firmware tricks that span generations and have significant value can't be kept secret anyway because any significant firmware-based advantage would be reverse engineered by competitors for the next generation regardless of whether they have the source code.
Now your development costs are lower than competitors' because you didn't have to pay to fix any bugs that one of your customers fixed first, and more people buy your hardware because your firmware is less broken than the competition.
What happens in that case is that competitors copy your hardware and throw the open source firmware on it to undercut you. Consumers don't know how to differentiate your products without marketing/segmentation and OEMs mostly care about the BOM cost. It doesn't matter much that your competitors are 2-6 months behind because they're still killing the long tail sales that sustain a company.
Note that I'm still pro-open source, but I've seen this cycle play out in the real world enough times to understand why manufacturers are paranoid about releasing anything that might help a competitor, even if it benefits their customers.
> You start writing firmware as soon as the hardware design is finalised; it needs to be ready as soon as the chips are ready to ship.
On top of that, there are bound to be errors in the hardware design; no modern technology even comes close to being formally proven correct, it's just too damn complex/large. Only after the first tapeout of an ASIC can you actually test it and determine what you need to correct and where to correct it (microcode, EC firmware, OS, or application layer).
Indeed. A friend who is more plugged into such things told me 4-5 years ago that they laid off most of the senior Intel network driver team, basically the only edge they had. I can't imagine things are any better these days.
Inertia is a hell of a thing, but you are starting to see the cracks form. I just don’t know if there is an alternative.
When? The Intel X710 series of network cards was released in 2014, and it wasn't until ~2018 that it became actually usable (end of 2018? I don't recall really, but when I stumbled upon it it had already been a public problem for more than a year, and it took a few more months for patches to come).
I'm talking things like full OS crashes while doing absolutely nothing, no traffic whatsoever, or even better, silently starting to drop all network traffic (relatively silently: just an error message in the logs, but otherwise no indication, and the interface still shows up as fine and up in the OS). It was largely a driver issue (although neither of Intel's drivers worked, so not only that) that was later fixed.
After that, it was rock solid. But the fact that there was a high-end network card, sold for lots of money and on hardware compatibility lists at various vendors, which didn't work at all for pretty much everyone for years is disgusting.
Back at the start of the century, Intel networking cards were the 'best reliability for the dollar' and for some reason had a grudge against Linksys even before the Cisco buyout [0]. Same for most of their B/G wireless stuff.
[0] - That came up once.
>Why can't they hire someone that actually knows what they are doing?
Because those employees cost a lot of money and these commodity widgets have razor thin margins that don't enable them to pay high salaries while also making enough profit to stay in business.
You can pay more to hire better people and put the extra cost into the price of the product, but then HP, Lenovo, Dell, et al. aren't gonna buy your product anymore. They'll buy instead from your competition, who's maybe worse but provides lower prices, which is what's most important to them: the average end user of the laptop won't notice the difference between network cards, but they do check the sticker price of the machine on Amazon/Walmart and make the purchasing decision on that and stuff like the CPU and GPU, not on the network card in the spec sheet.
I feel like there's an opportunity for a joke here somewhere along the lines of hardware companies being really terrible at writing software, while software companies being just a normal amount of terrible at writing software.
A few attempts with ChatGPT managed it: "Hardware companies writing software is like watching a train wreck in slow motion. Software companies? They just crash at regular speed."
Hardware manufacturers see software as a cost center, it’s often made as cheaply as possible. And hardware engineers aren’t necessarily good software developers. It isn’t their main expertise.
The wording of the headline is a bit misleading here. I followed the link thinking it might be a firmware or silicon bug as I have a couple of routers at home with mt76 wifi, but was relieved to find it's just a bug in the vendor's 'sdk' shovelware. I'm baffled that anyone even thought about using that, given there's such good mt76 support from mainline kernels with hostapd.
> I'm baffled that anyone even thought about using that, given there's such good mt76 support from mainline kernels with hostapd.
Not sure if you noticed, but the OpenWrt 21.02.x series (based on the mainline kernel 5.4 series) is affected, and these guys generally know their game when it comes to wireless on Linux. So much so that I think the mainline kernel mt76 driver is actually maintained by an OpenWrt developer.
Maybe MediaTek has shipped some modified versions of OpenWRT using this "wappd" thing to their B2B customers (as part of the SDK perhaps?) and are now advertising those as vulnerable.
Yes, I'm assuming that's exactly why OpenWrt is mentioned but it's very misleading.
The OpenWrt folks generally have good enough taste not to ship any drivers or userspace junk from vendor SDKs, though they do have a fair-sized set of backport patches on top of the (somewhat elderly) mainline kernels they do ship.
I'm running up-to-date mainline on my routers, not OpenWrt kernels. The mt76 support in 6.11 (and previously in 6.9 and 6.10) is complete enough that I don't need to carry any patches at all over what's in Linus' tree.
The wappd service is primarily used to configure and coordinate the operations of wireless interfaces and access points using Hotspot 2.0 and related technologies. The structure of the application is a bit complex but it’s essentially composed of this network service, a set of local services which interact with the wireless interfaces on the device, and communication channels between the various components, using Unix domain sockets.
On the bright side, it doesn't sound like this is in baseband firmware but instead in a "value add" service that isn't 100% necessary to the functioning of the WNIC itself.
This reminds me of how some devices come with driver packages that include not just the actual driver software that's usually tiny and unobtrusive, but several orders of magnitude larger bloatware for features that 99% of users don't need nor want. Printers and GPUs are particularly guilty of this.
Is there some logic to MediaTek's naming conventions, or are all their devices just MTxxxx, where x is some incremented/random number?
I have a device with a mt6631 wifi chip and I'd assume it's unaffected just because it's not mentioned as affected anywhere, but it's hard to tell where it might fit into the lineup.
They say that OpenWrt 19.07 and 21.02 are affected, but as far as I can tell, official builds of OpenWrt only use the mt76 driver and not the Mediatek SDK.
I've been buying laptops with AMD CPUs, but they always come with these trash MediaTek RZ616 Wi-Fi cards; why is that? I've been replacing them with Intel Wi-Fi cards, so now I have a pile of RZ616 cards ready to become future microplastics :-(
Intel sells two versions of their Wi-Fi cards: ones ending in 1 use the CNVi protocol and work only with Intel chipsets; these are sold really cheap to OEMs. Ones ending in 0 use standard PCIe and are sold to OEMs for ~$10 more.
AMD decided to brand Mediatek's MT7921 and MT7922 as RZ608 and RZ616 to have something to sell to OEMs at the same price point as Intel's xx1 chips.
Lenovo grew unhappy with MediaTek as well and started soldering down Qualcomm chips for WLAN on their AMD platforms only to be burned by buggy firmware/driver interactions on Linux (which they officially sell and support).
And Qualcomm stretches themselves rather thin on the mainline kernel side once a chipset generation is no longer the latest. It takes a tremendous amount of vendor pressure to make Qualcomm do anything these days.
iwlwifi has its own set of problems, the biggest being no AP mode (on 5 GHz). Also, Intel's firmware license is more restrictive than MediaTek's, and being FullMAC, the firmware does a lot more of the heavy lifting; I personally prefer SoftMAC. There simply aren't that many great options out there; gone are the golden days of ath9k.
Have you tried them further than "I don't trust MediaTek"?
I've had, sequentially, an Intel and an AMD ThinkPad for work (I killed the first one). Turns out the wifi is much, much better on the AMD one with the MediaTek chipset than on the Intel one with the Intel chipset. On the latter, I had very frequent disconnects from the network (several per hour) along with atrocious latency, even on 5 GHz. And by atrocious latency, I mean atrocious as in "it is more than noticeable when using ssh". The current one has been rock solid for the past two or three years.
So yeah, I guess it really depends. The specific chipset I have is a MT7921 and I'm running Linux, YMMV. And it also may depend on the laptop itself.
The laptops run Windows. It's not about trust, they just don't work well. They take a long time to connect, particularly after returning from sleep. Many times I have to disconnect and reconnect manually after returning from sleep. I replace them with Intel Wi-Fi ones and it just works. I'd really rather just replace them than face user complaints.
Urgh. Kudos to you, that's no easy task. And this might explain a lot; I've never liked the WiFi experience in Windows. I find Linux with IWD to be stable as hell.
IIRC my phone uses a MediaTek chipset. And I vaguely remember the vendor has moved away from MediaTek since because of the ahem quality of those products...
No idea how WiFi is done on a phone though. Is there a way to find out whether the phone is affected? I hardly ever use WiFi because I have unlimited cellular data and good coverage, but would still be good to know.
Sure, it's annoying and pedantic to nitpick someone's helpful answers and snippets. But sometimes the goal is to foster a better understanding, and reduce superstition and cargo-culting.
"sync<CR>sync<CR>sync<CR>reboot<CR>" became an idiom for various reasons. Then it became "sync; sync; sync; reboot" and was deemed equivalent, which misses the point that physically pressing "enter" introduced some human-length pauses into the process. Then "reboot(8)" incorporated those syncs, and "shutdown(8)" provided more flexibility, but the idiom persisted.
And to this day, the tier-1 support script says to clear your cookies and cache, try a different browser, factory reset your phone, disable firewall, disable AV, reboot your router, bypass your switch and plug in one device directly, wait 15 minutes and try again, and we've abandoned all attempts to understand, diagnose, or find root causes when the underlying systems are too diverse and complex to understand or keep your tech's expertise up-to-date.
That's a strawman; nobody said that. All I suggest is that if you wish to accomplish a particular task, you read the manual at that point and find relevant ways to invoke the command.
The options I've memorized over the years are the ones I've used the most often, and this inertia can lead to ignorance, unless I periodically revisit the manual page to see what else can be done.
Every IT/CS instructor will tell you that your source of truth is the vendor's documentation. Don't waste your time Googling Stackexchange when the manual pages are available right on the system, on a website, or however. The manual pages are written by the developers and tech writers to specifically tell you how to use these commands.
You can either "cat for clarity" for the rest of your career, or you can learn new methods like shell redirects, "tee(1)", "exec <file; while read var; do cmd; done". I wouldn't be surprised if people start their careers thinking that "cat" is just for starting up a pipeline. Other students may be taught that "cat" is an elementary way to just put file contents on their terminal screen, and then they'll subsequently learn how "more(1)" is superior in this regard.
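For what it's worth, the idioms mentioned above look something like this (a quick sketch; the file paths are made up for illustration):

```shell
# Sample input file (hypothetical path)
printf 'alpha\nbeta\n' > /tmp/demo_$$

# Redirect instead of "cat file | grep": the shell opens the file itself
grep -c 'a' < /tmp/demo_$$        # counts lines containing 'a'

# Read line by line without spawning cat (and without a pipeline subshell)
while IFS= read -r line; do
    echo "got: $line"
done < /tmp/demo_$$

# tee(1) writes to a file and passes input through to stdout at once
echo 'hello' | tee /tmp/copy_$$

rm -f /tmp/demo_$$ /tmp/copy_$$
```

None of these forks a `cat` process, and the `while read` form lets you act on each line as it arrives.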
"When all you have is a hammer, everything looks like a nail."
I am not sure how passwords are relevant to the pointless chaining of two distinct commands, rather than invoking a straightforward "sudo -s".
People writing "sudo su" are simply imitating a common StackExchange idiom without knowing why.
"su" requires passwords unless invoked by root. "sudo" may be configured to permit/deny specific commands. So if you write that, then you're saying 'become root via the sudoers(5) config and then fork, exec, become root again via the setuid binary "su", in order to run an interactive shell.'
It's a poor habit to be promoting, because it assumes things about the configuration and suggests that "su" is equal to other particular "sudo" maintenance commands, when the point is simply to drop into a root shell, which is a facility provided directly by "sudo", if you'd only read the manual page and learn its options.
Nothing will stop you from invoking "sudo -s" without a password, without another fork/exec, without another suid utility carrying a significantly different authentication model.
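To make the sudoers(5) point concrete, here is a policy sketch (the usernames, group, and paths are hypothetical; edit the real file with visudo, never directly):

```
# Let user 'deploy' restart one service, and nothing else, without a password:
deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp

# Let members of group 'admin' run anything as anyone,
# which includes getting a root shell via "sudo -s":
%admin ALL=(ALL) ALL
```

Under the first rule, "sudo su" would simply be denied, while the permitted command still works; that's exactly the kind of configuration assumption the "sudo su" habit glosses over.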
Doesn't "sudo -s" keep the current user's environment variables as opposed to "sudo su"?
I admit that in the context of doing "ls /sys/module" it's likely not a huge problem, but I do (in my gut) feel that for running an elevated command, it's cleaner to just drop in as actual root, instead of masquerading as root.
> Depending on the security policy, the user's PATH environment variable may be modified, replaced, or passed unchanged to the program that sudo executes.
Essentially a "maybe, depending on what your OS policy is", proving that your comments are less than helpful.
Well, you are moving the goalposts like crazy, considering you were just executing a simple command in your original example, but let me search it and demonstrate the rest of it, rather than Ctrl-F-ing for the one option in question:
-E, --preserve-env
--preserve-env=list
-H, --set-home
-i, --login
-s, --shell
The sudoers policy subjects environment variables passed as options to the same restrictions as existing environment variables with one important difference. If the setenv option is set in sudoers, the command to be run has the SETENV tag set or the command matched is ALL, the user may set variables that would otherwise be forbidden.
I am not sure what is meant by "masquerading as root" -- effective UID rather than real UID? "sudo" should set both to the target; there is no masquerading, even if you end up in a root shell with features of the invoking user's environment, those relevant variables should've been adjusted in the process.
So what you're proposing is to escalate privilege using "sudo"'s security model and configuration, which may add, suppress, or alter environment variables, as well as SELinux contexts, resource limits, cgroups, or whatever, and then have a second go-round through "su" that may alter the environment further, making for an unpredictable interaction. Hopefully it's all harmonized through PAM, but all you wanted is an interactive shell. Why try to justify this copy-paste idiom?
In fact, I could rewrite your original snippet as
sudo ls /sys/module
Why are you even opening an interactive shell to do one simple command? If that's all you want, then learn and use the appropriate idiom for it. "sudo(8)" was originally designed to run one-off commands without invoking that shell. In fact, security experts will tell you not to leave root shells open at any time. If you can run a "sudo command" and return to your user shell, then that is best practice.
I still cannot fathom why, in this day and age when people buy any silicon that's available, these C-tier vendors don't adopt the PC strategy and completely open their firmware to the open source community.
It feels like they're using software as a solution to a hardware problem.
No matter what the software says, or what keys it has set, the hardware should still be hard-configured to honour regional power output limits. This could be something like a block of DIP switches under the cover, so if a user unbolts the case, finds the switch, and toggles it to some country with looser requirements, they're obviously going against manufacturer advice, and that washes the manufacturer's hands of liability.
Making the code available doesn’t necessarily mean that you can actually flash the image since it can be cryptographically locked down. Or even you support flashing but only let you do certain trusted operations from a signed image.
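As a toy model of that lockdown, signature verification over a firmware image can be sketched with openssl (illustrative only; real devices verify in the bootloader against a burned-in public key, and the file names here are made up):

```shell
# Vendor side: generate a keypair and sign the firmware image
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out vendor.key 2>/dev/null
openssl pkey -in vendor.key -pubout -out vendor.pub
printf 'firmware image bytes' > image.bin
openssl dgst -sha256 -sign vendor.key -out image.sig image.bin

# Device side: only the public key ships on the device; an image whose
# signature doesn't verify is rejected before flashing
openssl dgst -sha256 -verify vendor.pub -signature image.sig image.bin
```

Since only the vendor holds the private key, having the source code doesn't by itself let you produce an image the device will accept.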
Honestly, if you can't update the firmware you're in the same situation... knowing that you have a critical vulnerability and unable to fix it.
Enforcing trusted operations is definitely more work than they are going to do (if it's even possible to "do this right").
In a semi-ideal world, I would look for a vendor that permits only certain ops from a flashed image and hope that their crappy "restriction enforcing" code is also riddled with vulnerabilities, so it's really just "follow the rules please".
Going the PC route means fully embracing your hardware and accepting whatever software the user wants, not throwing unbuildable source somewhere and making it impossible to use. That's the faux open source we have today, where someone must comply with the GPL or something.
I think you happened to miss the point about regulatory requirements that make this difficult/impossible to accomplish for the radio vendor. I think the proliferation of SDR is the only hope to change the broader regulatory culture but until that happens you're not going to see a shift.
I think it's also rich calling GPL compliance faux open source. There really is no true Scotsman.
Manufacturers of radios have to prevent the ability to behave in a non-compliant manner. One way of accomplishing that is preventing the user from updating software to non-official versions. Another is to prevent the small subset of functionality to be updated by non-official versions. This isn't a new requirement and has been around since forever.
That point is completely bogus, since hardware oscillators limit the range. And even multi-range devices let the driver decide the region, so even with closed source you can already violate FCC regulations (pro tip: set your wifi region to Cuba for extra channels).
Hardware oscillators don't limit anything, PLL ranges and amplifier band characteristics do, and they have soft falloffs. Please stop making claims you have little knowledge around.
(no, a PLL is not considered a "hardware oscillator" in any practical sense by anyone working in this area, that's "tomatoes in a fruit salad" category)
As a contributor to OpenWrt, it makes me wonder why people don't differentiate between OpenWrt and various proprietary vendor SDKs. No one would have referenced Fedora if there was a bug in Nobara.
There is a better middle ground here: saying that the company that made it may not have known, but nation-state threat actors most likely do.
When you see actors at this level manufacture thousands of explosive-filled devices at very high production quality, inserting some compromised things like printers or routers into a company network wouldn't be, and shouldn't be, a surprise.
If the nation state actors did intentionally backdoor it, then they would have wanted to make it look like incompetence. Here’s a link to the Simple Sabotage Field Manual from the US. It worked well in occupied Europe during WWII:
I don't think that link is necessarily better just because it's the original source. The linked article gives a concise overview, while the blog post spends the first paragraph talking about moving and starting a new job.
In general, I would wager that HN prefers intellectual curiosity over overviews. The submission guidelines imply that by stating "Please submit the original source. If a post reports on something found on another site, submit the latter."
Sure, though I'd argue in the case of vulnerabilities an overview is particularly valuable. Not everyone wants to dive into the details; in my case what I'm most interested in is whether I (or anyone else at my day job) might be affected.
I would agree. I would also say that when the secondary article adds a lot of value above the original, as is the case here, the secondary source is better, because it is easy to follow its link to the original if that's what you'd like to see.
I definitely agree with the guideline around favoring original sources, but this seems like a good time to deviate.
Their exploit development process is interesting, and I like to think I'd have done something similar (that is, compiling an easier-to-exploit version of the application and gradually working up to the real thing)
Really? I think an extraordinary claim like "eliminating a whole class of problems makes applications no more secure in general" should also come with extraordinary evidence.
The point of the person you're replying to is that JVM software has far fewer vulnerabilities than it would have otherwise.
The number of CVEs reveals that there is a lot of Java software and that there's a strong culture of importing dependencies. But we also care about the nature of them, the normalized relative frequency of very serious flaws like RCE exploits.
A CVE list says nothing. I made my own language which has no CVEs, that obviously doesn't mean it's secure. The relevant metric is "CVEs per unit of functionality".
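To make that concrete with made-up numbers (purely illustrative, not real CVE statistics): a huge ecosystem can have far more CVEs in absolute terms and still be safer per unit of code.

```shell
# Hypothetical figures: 1200 CVEs across 50,000 KLoC of ecosystem code,
# versus 3 CVEs in a 20 KLoC hobby project. Normalizing flips the ranking.
awk 'BEGIN {
    printf "big ecosystem:  %.3f CVEs per KLoC\n", 1200 / 50000
    printf "tiny ecosystem: %.3f CVEs per KLoC\n", 3 / 20
}'
```

Here the tiny project comes out roughly six times worse per KLoC despite having 400x fewer CVEs, which is the point about raw counts being meaningless.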
This is a nonsense statement unless you note the Java runtime. Java is a language. The runtime is the software that runs the Java code. There's more than one runtime.
My anecdotal experience with MediaTek wifi is that it's a very flaky, low-quality brand. That might be more of the reason. The firmware is probably unpolished, rushed, and not maintained by competent people.
I would like to remind people of the 2016 Adups backdoor:
> According to Kryptowire, Adups engineers would have been able to collect data such as SMS messages, call logs, contact lists, geo-location data, IMSI and IMEI identifiers, and would have been able to forcibly install other apps or execute root commands on all devices.
Intel networking used to have the "expensive but it works" reputation. I'm not confident their current products would be as good.
You'd be glad to hear the E810 series is no better in this regard. At least the out-of-tree driver somewhat works, and supports more than one queue.
Because you have to overpay all those executives and shareholders.
Hardware companies are bad at making software, and the corollary, software companies are bad at making hardware.
Which is why NVIDIA is king of the world, they are good at both hardware and software.
In the middle you have Apple, which is getting better at making certain kinds of hardware, worse at others, and definitely worse at software.
Hardware manufacturers see software as a cost center, it’s often made as cheaply as possible. And hardware engineers aren’t necessarily good software developers. It isn’t their main expertise.
Because money
Is there any news releases or other information about that program, such as their goals, how much of the feed is merged upstream, etc?
The wording of the headline is a bit misleading here. I followed the link thinking it might be a firmware or silicon bug as I have a couple of routers at home with mt76 wifi, but was relieved to find it's just a bug in the vendor's 'sdk' shovelware. I'm baffled that anyone even thought about using that, given there's such good mt76 support from mainline kernels with hostapd.
> relieved to find it's just a bug in the vendor's 'sdk' shovelware
Vendors plural to worry about:
“…driver bundles used in products from various manufacturers, including [but not limited to] Ubiquiti, Xiaomi and Netgear.”
That said, vendors (plural) say no products use this, e.g. Ubiquiti:
https://community.ui.com/questions/CVE-2024-20017/b3f1a425-d...
Sorry, yes, my use of 'vendor' here was ambiguous. I meant Mediatek, the chipset vendor.
> I'm baffled that anyone even thought about using that, given there's such good mt76 support from mainline kernels with hostapd.
Not sure if you noticed but the OpenWRT 21.02.x series (based on mainline kernel 5.4 series) is affected, and these guys generally know their game when it comes to wireless on Linux. So much so that I think the mainline kernel mt76 driver is actually maintained by an OpenWRT developer.
Upstream OpenWrt does not use `wappd` so it should not be affected.
Interesting. The bulletin lists "OpenWrt 19.07, 21.02 (for MT6890)" as vulnerable, but OpenWrt indeed had no security advisory out for this:
https://openwrt.org/advisory/start
Maybe MediaTek has shipped some modified versions of OpenWRT using this "wappd" thing to their B2B customers (as part of the SDK perhaps?) and are now advertising those as vulnerable.
Yes, I'm assuming that's exactly why OpenWrt is mentioned but it's very misleading.
The OpenWrt folks generally have good enough taste not to ship any drivers or userspace junk from vendor SDKs, though they do have a fair-sized set of backport patches on top of the (somewhat elderly) mainline kernels they do ship.
I'm running up-to-date mainline on my routers, not OpenWrt kernels. The mt76 support in 6.11 (and previously in 6.9 and 6.10) is complete enough that I don't need to carry any patches at all over what's in Linus' tree.
Original blog: https://blog.coffinsec.com/0day/2024/08/30/exploiting-CVE-20...
The wappd service is primarily used to configure and coordinate the operations of wireless interfaces and access points using Hotspot 2.0 and related technologies. The structure of the application is a bit complex but it’s essentially composed of this network service, a set of local services which interact with the wireless interfaces on the device, and communication channels between the various components, using Unix domain sockets.
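As an aside, the Unix-domain-socket channels mentioned here are easy to enumerate on a running Linux system. A hedged one-liner (which tool is available varies by distro; on a stripped-down router image neither may exist):

```shell
# List listening Unix domain sockets (the kind of IPC channel wappd
# reportedly uses between its components), with fallbacks.
ss -xl 2>/dev/null || netstat -xl 2>/dev/null || echo "(no socket tools available)"
```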
On the bright side, it doesn't sound like this is in baseband firmware but instead in a "value add" service that isn't 100% necessary to the functioning of the WNIC itself.
This reminds me of how some devices come with driver packages that include not just the actual driver software that's usually tiny and unobtrusive, but several orders of magnitude larger bloatware for features that 99% of users don't need nor want. Printers and GPUs are particularly guilty of this.
> The structure of the application is a bit complex
I've done some Android development so let me translate that for you: "layers upon layers of dog shit APIs"
I've done some Android RE and agree with you. It's basically Enterprise Java culture.
Is there some logic to MediaTek's naming conventions, or are all their devices just MTxxxx, where x is some incremented/random number?
I have a device with a mt6631 wifi chip and I'd assume it's unaffected just because it's not mentioned as affected anywhere, but it's hard to tell where it might fit into the lineup.
They say that OpenWrt 19.07 and 21.02 are affected, but as far as I can tell, official builds of OpenWrt only use the mt76 driver and not the Mediatek SDK.
It’s similar for Ubiquiti:
https://community.ui.com/questions/CVE-2024-20017/b3f1a425-d...
There are vulnerable drivers for some chipsets used by UBNT hardware, but they have zero products that use those drivers.
I've been buying laptops with AMD CPUs, but they always come with these trash MediaTek RZ616 Wi-Fi cards. Why is that? I've been replacing them with Intel Wi-Fi cards, so now I have a pile of RZ616 cards ready to become future microplastics :-(
Intel sells two versions of their WiFi cards. Ones ending in 1 use the CNVi interface and work only with Intel chipsets; these are sold really cheap to OEMs. Ones ending in 0 use standard PCIe and are sold to OEMs for ~$10 more.
AMD decided to brand Mediatek's MT7921 and MT7922 as RZ608 and RZ616 to have something to sell to OEMs at the same price point as Intel's xx1 chips.
Lenovo grew unhappy with MediaTek as well and started soldering down Qualcomm chips for WLAN on their AMD platforms only to be burned by buggy firmware/driver interactions on Linux (which they officially sell and support). And Qualcomm stretches themselves rather thin on the mainline kernel side once a chipset generation is no longer the latest. It takes a tremendous amount of vendor pressure to make Qualcomm do anything these days.
iwlwifi has its own set of problems, the biggest being no AP mode (on 5 GHz). Also, Intel's firmware license is more restrictive than MediaTek's, and being FullMAC, the firmware does a lot more of the heavy lifting; I personally prefer SoftMAC. There simply aren't that many great options out there; gone are the golden days of ath9k.
Have you tried them further than "I don't trust MediaTek"?
I've had, sequentially, an Intel and an AMD ThinkPad for work (I killed the first one). Turns out, the WiFi is much, much better on the AMD one with the MediaTek chipset than on the Intel one with the Intel chipset. On the latter, I had very frequent disconnects from the network (several per hour) along with atrocious latency, even on 5GHz. And by atrocious latency, I mean atrocious as in "it is more than noticeable when using ssh". The current one has been rock solid for the past two or three years.
So yeah, I guess it really depends. The specific chipset I have is a MT7921 and I'm running Linux, YMMV. And it also may depend on the laptop itself.
The laptops run Windows. It's not about trust, they just don't work well. They take a long time to connect, particularly after returning from sleep. Many times I have to disconnect and reconnect manually after returning from sleep. I replace them with Intel Wi-Fi ones and it just works. I'd really rather just replace them than face user complaints.
Urgh. Kudos to you, that’s no easy task. And this might explain a lot, I’ve never liked the WiFi experience in windows. I find Linux with IWD to be stable as hell
You get what you pay for.
You know why. Price.
IIRC my phone uses a MediaTek chipset. And I vaguely remember the vendor has moved away from MediaTek since because of the ahem quality of those products...
No idea how WiFi is done on a phone though. Is there a way to find out whether the phone is affected? I hardly ever use WiFi because I have unlimited cellular data and good coverage, but would still be good to know.
termux -> "sudo su" and then
(it gives an output similar to lsmod)

Back in the day, shell coders would receive the "Useless Use Of Cat" award.
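A hedged sketch of the kind of check being described (the thread's example is `ls /sys/module`; the MediaTek module-name prefixes below are assumptions and vary by device and kernel). To keep it self-contained, a sample listing stands in for the live one:

```shell
# Filter a kernel-module listing for MediaTek WiFi drivers.
check_modules() {
    # mt76/mt79/mtk are assumed prefixes; adjust for your device
    grep -iE '^(mt76|mt79|mtk)' || echo "no MediaTek wifi modules found"
}

# On a rooted phone under Termux: sudo ls /sys/module | check_modules
# Here, a sample listing instead:
printf 'cfg80211\nmac80211\nmt7921e\n' | check_modules   # -> mt7921e
```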
https://news.ycombinator.com/item?id=23341711
Today it's giving way to "useless use of su" where admins aren't aware of sudo(8) options like "-s" or "-i"
And anyone who points that out still gets to receive the "useless use of code golfing" award.
'sudo su' is purely absurd, but then how is pointing that out "golfing"? :)))
So maybe Perl is not so write-only after all ? ;)
Sure, it's annoying and pedantic to nitpick someone's helpful answers and snippets. But sometimes the goal is to foster a better understanding, and reduce superstition and cargo-culting.
"sync<CR>sync<CR>sync<CR>reboot<CR>" became an idiom for various reasons. Then it became "sync; sync; sync; reboot" and deemed equivalent, which was missing the point that physically pressing "enter" introduced some human-length pauses into the process. Then "reboot(8)" incorporated those syncs, and "shutdown(8)" provided more flexibility, but the idiom persisted.
And to this day, the tier-1 support script says to clear your cookies and cache, try a different browser, factory reset your phone, disable firewall, disable AV, reboot your router, bypass your switch and plug in one device directly, wait 15 minutes and try again, and we've abandoned all attempts to understand, diagnose, or find root causes when the underlying systems are too diverse and complex to understand or keep your tech's expertise up-to-date.
Not memorizing all of sudo's flags isn't cargo culting, and neither is using cat to improve clarity.
Strawman--nobody said that. All I suggest is that if you wish to accomplish a particular task, then read the manual at that point, and find relevant ways to invoke the command.
The options I've memorized over the years are the ones I've used the most often, and this inertia can lead to ignorance, unless I periodically revisit the manual page to see what else can be done.
Every IT/CS instructor will tell you that your source of truth is the vendor's documentation. Don't waste your time Googling Stackexchange when the manual pages are available right on the system, on a website, or however. The manual pages are written by the developers and tech writers to specifically tell you how to use these commands.
You can either "cat for clarity" for the rest of your career, or you can learn new methods like shell redirects, "tee(1)", "exec <file; while read var; do cmd; done". I wouldn't be surprised if people start their careers thinking that "cat" is just for starting up a pipeline. Other students may be taught that "cat" is an elementary way to just put file contents on their terminal screen, and then they'll subsequently learn how "more(1)" is superior in this regard.
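A tiny illustration of the point: the same output three ways, where the `cat` pipeline spawns an extra process for something the consuming command or a redirect already does.

```shell
printf 'alpha\nbeta\ngamma\n' > /tmp/demo.txt

cat /tmp/demo.txt | grep beta   # "useless use of cat": extra process + pipe
grep beta /tmp/demo.txt         # grep opens the file itself
grep beta < /tmp/demo.txt       # shell redirect, still no extra process

# Line-by-line processing via a redirect rather than a cat pipeline:
while read -r line; do
    echo "line: $line"
done < /tmp/demo.txt
```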
"When all you have is a hammer, everything looks like a nail."
So with termux there is an actual root password set, but it differs from the phone password so it's often forgotten.
The termux developers, knowing this, set it up such that the default termux user can invoke sudo without a password.
It might seem lazy, but it's very useful.
I am not sure how passwords are relevant to the pointless chaining of two distinct commands, rather than invoking a straightforward "sudo -s".
People writing "sudo su" are simply imitating a common StackExchange idiom without knowing why.
"su" requires passwords unless invoked by root. "sudo" may be configured to permit/deny specific commands. So if you write that, then you're saying 'become root via the sudoers(5) config and then fork, exec, become root again via the setuid binary "su", in order to run an interactive shell.'
It's a poor habit to be promoting, because it assumes things about the configuration and suggests that "su" is equal to other particular "sudo" maintenance commands, when the point is simply to drop into a root shell, which is a facility provided directly by "sudo", if you'd only read the manual page and learn its options.
Nothing will stop you from invoking "sudo -s" without a password, without another fork/exec, without another suid utility carrying a significantly different authentication model.
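To summarize the variants being compared, a hedged cheat-sheet (exact behavior depends on the system's sudoers and PAM configuration):

```shell
# sudo su       # two privilege transitions: sudo -> root, then su -> root shell
# sudo -s       # root shell, mostly keeping the caller's environment
# sudo -i       # root *login* shell, with root's own environment
# sudo <cmd>    # run one command as root; no lingering root shell

# The last form is the one-off idiom, e.g. (-n: never prompt, fail instead):
sudo -n whoami 2>/dev/null || echo "(sudo unavailable or needs a password here)"
```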
Doesn't "sudo -s" keep the current user's environment variables as opposed to "sudo su"?
I admit that in the context of doing "ls /sys/module" it's likely not a huge problem, but I do (in my gut) feel that for running an elevated command, it's cleaner to just drop in as actual root, instead of masquerading as root.
> if you'd only read the manual page and learn its options.
> Depending on the security policy, the user's PATH environment variable may be modified, replaced, or passed unchanged to the program that sudo executes.
Essentially a "maybe, depending on what your OS policy is", proving that your comments are less than helpful.
Well, you are moving the goalposts like crazy, considering you were just executing a simple command with your original example, but let me search it and demonstrate the rest of it, rather than your Ctrl-F find one option in question:
https://manpages.ubuntu.com/manpages/noble/en/man8/sudo.8.ht...
I am not sure what is meant by "masquerading as root" -- effective UID rather than real UID? "sudo" should set both to the target; there is no masquerading, and even if you end up in a root shell with features of the invoking user's environment, the relevant variables should have been adjusted in the process.

So what you're proposing is to escalate privilege using "sudo"'s security model and configuration, which may add, suppress, or alter environment variables, as well as SELinux contexts, resource limits, cgroups, or whatever, and then have a second go-round through "su" that may alter the environment further, making for an unpredictable interaction. Hopefully it's all harmonized through PAM, but all you wanted is an interactive shell. Why try to justify this copy-paste idiom?
In fact, I could rewrite your original snippet as
Why are you even opening an interactive shell to run one simple command? If that's all you want, then learn and use the appropriate idiom for it. "sudo(8)" was originally designed to run one-off commands without invoking that shell. In fact, security experts will tell you not to leave root shells open at any time. If you can run a "sudo command" and return to your user shell, then that is best practice.

Hey, thanks for the one extremely useful comment!
Thanks! I had thought about mentioning my phone is not rooted, but then I skipped it because that should be the default...
I still cannot fathom why, in this day and age where people buy any silicon that's available, these C-tier vendors don't adopt the PC strategy and completely open their firmware to the open source community.
FCC regulations around not making it easy to transmit outside of the licensed band tend to cause this.
It feels like they're using software as a solution to a hardware problem.
No matter what the software says, or what keys it has set, the hardware should still be hard-configured to honour regional power output limits. This could be something like a block of DIP switches under the cover, so if a user unbolts the case, finds the switch, and toggles it to some country with looser requirements, they are obviously going against manufacturer advice, which washes the manufacturer's hands of liability.
Making the code available doesn’t necessarily mean that you can actually flash the image since it can be cryptographically locked down. Or even you support flashing but only let you do certain trusted operations from a signed image.
I feel like I'm missing something here.
Honestly, if you can't update the firmware you're in the same situation... knowing that you have a critical vulnerability and unable to fix it.
Enforcing trusted operations is definitely more work than they are going to do (if it's even possible to "do this right").
In a semi-ideal world, I would look for a vendor that permits only certain ops from a flashed image and hope that their crappy "restriction enforcing" code is also riddled with vulnerabilities, so it's really just "follow the rules please".
you managed to completely miss the point.
Going the PC route means fully embracing that your hardware accepts whatever software the user wants, not throwing unbuildable source somewhere and making it impossible to use. That's the faux open source we have today, when someone must comply with the GPL or something.
I think you happened to miss the point about regulatory requirements that make this difficult/impossible to accomplish for the radio vendor. I think the proliferation of SDR is the only hope to change the broader regulatory culture but until that happens you're not going to see a shift.
I think it's also rich calling GPL compliance faux open source. There really is no true Scotsman.
What are those regulatory requirements, and what do they say?
Thank you.
https://apps.fcc.gov/kdb/GetAttachment.html?id=zXtrctoj6zH7o...
Manufacturers of radios have to prevent the ability to behave in a non-compliant manner. One way of accomplishing that is preventing the user from updating software to non-official versions. Another is to prevent the small subset of functionality to be updated by non-official versions. This isn't a new requirement and has been around since forever.
That point is completely bogus, since hardware oscillators limit the range. And even multi-range devices let the driver decide the region, so even with closed source you can already offend FCC regulations (pro tip: set your WiFi region to Cuba for extra channels).
Hardware oscillators don't limit anything, PLL ranges and amplifier band characteristics do, and they have soft falloffs. Please stop making claims you have little knowledge around.
(no, a PLL is not considered a "hardware oscillator" in any practical sense by anyone working in this area, that's "tomatoes in a fruit salad" category)
ath9k.
> The affected versions include MediaTek SDK versions 7.4.0.1 and earlier, as well as OpenWrt 19.07 and 21.02.
> The vulnerability resides in wappd, a network daemon included in the MediaTek MT7622/MT7915 SDK and RTxxxx SoftAP driver bundle.
OpenWRT doesn't seem to use wappd though?
As a contributor to OpenWrt, it makes me wonder why people don't differentiate between OpenWrt and various proprietary vendor SDKs. No one would have referenced Fedora if there was a bug in Nobara.
Came here wondering this, as I have several Netgear APs running OpenWRT on my home network. Sounds like I'm in the clear?
If it's a clean upstream OpenWRT, yes. For vendored OpenWRT, all bets are off.
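For anyone wanting to double-check their own device, a minimal hedged sketch. The daemon name comes from the advisory, but whether and how it would appear on a vendored build is an assumption; a sample process listing stands in for `ps w` here:

```shell
# Check a process listing for the vulnerable daemon.
check_wappd() {
    # $1: a process listing, e.g. "$(ps w)" on the device
    if printf '%s\n' "$1" | grep -q 'wappd'; then
        echo "wappd is running"
    else
        echo "no wappd found"
    fi
}

# On the device: check_wappd "$(ps w)"
# Sample listing from a clean upstream build:
check_wappd '  1 root  /sbin/procd
812 root  /usr/sbin/hostapd'    # -> no wappd found
```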
That's why we need free firmware. I'm tired of Broadcom and Ralink.
This exploit is hard to distinguish from a back door.
Posting claims of it being such is pretty easy, though.
There is a better middle ground here: saying the company that made it may not have known, but nation-state threat actors most likely did.
When you see actors at this level set up manufacturing thousands of explosive filled devices at very high production quality, inserting some compromised things like printers or routers in a company network wouldn't be and shouldn't be a surprise.
If the nation state actors did intentionally backdoor it, then they would have wanted to make it look like incompetence. Here’s a link to the Simple Sabotage Field Manual from the US. It worked well in occupied Europe during WWII:
https://archive.org/details/SimpleSabotageFieldManual
Welcome back to the 90s.
Can the OP's link be changed to the original source, not the advertisement it currently links to? The exploit is documented https://blog.coffinsec.com/0day/2024/08/30/exploiting-CVE-20...
I don't think that link is necessarily better just because it's the original source. The linked article gives a concise overview, while the blog post spends the first paragraph talking about moving and starting a new job.
In general, I would wager that HN prefers intellectual curiosity over overviews. The submission guidelines imply that by stating "Please submit the original source. If a post reports on something found on another site, submit the latter."
Sure, though I'd argue in the case of vulnerabilities an overview is particularly valuable. Not everyone wants to dive into the details; in my case what I'm most interested in is whether I (or anyone else at my day job) might be affected.
I would agree. I would also say that when the secondary article adds a lot of value above the original, as is the case here, the secondary source is better, because it's easy to follow its link to the original if that's what you'd like to see.
I definitely agree with the guideline around favoring original sources, but this seems like a good time to deviate.
Their exploit development process is interesting, and I like to think I'd have done something similar (that is, compiling an easier-to-exploit version of the application and gradually working up to the real thing)
[flagged]
"Don't be snarky. [...] Omit internet tropes."
https://news.ycombinator.com/newsguidelines.html
Well, it was solved decades ago in Java, yet Java apps have proven no more secure in general.
It is a broader ecosystem problem that there is almost no incentive to write secure code. Security is an afterthought, like documentation.
> Java apps have proven no more secure in general
Really? I think an extraordinary claim like "eliminating a whole class of problems makes applications no more secure in general" should also come with extraordinary evidence.
I think Java's CVE list should say enough. Point being humans can muck anything up, regardless of safeguards
The point of the person you're replying to is that JVM software has far fewer vulnerabilities than it would have otherwise.
The number of CVEs reveals that there is a lot of Java software and that there's a strong culture of importing dependencies. But we also care about the nature of them, the normalized relative frequency of very serious flaws like RCE exploits.
A CVE list says nothing. I made my own language which has no CVEs, that obviously doesn't mean it's secure. The relevant metric is "CVEs per unit of functionality".
Also, popularity directly affects the number of CVEs.
This is a nonsense statement unless you note the Java runtime. Java is a language. The runtime is the software that runs the Java code. There's more than one runtime.
C, C++ and assembly are 3 languages where this regularly happens.
Can we stop with the snide comments now please? They're not helpful.
My anecdotal experience with MediaTek WiFi is that it's a very flaky, low-quality brand. That might be more of the reason. The firmware is probably unpolished, rushed, and not maintained by competent people.
Not an issue with ath9k. Guess why. Hint: not Rust related.
[flagged]
[flagged]
I would like to remind people of the 2016 Adups backdoor:
> According to Kryptowire, Adups engineers would have been able to collect data such as SMS messages, call logs, contact lists, geo-location data, IMSI and IMEI identifiers, and would have been able to forcibly install other apps or execute root commands on all devices.
https://www.bleepingcomputer.com/news/security/android-adups...
How is this relevant?