Hi everyone,
I'm looking for case studies on security and free software, in particular the differences in how the world can respond to the discovery of important vulnerabilities/exploits in free software vs. proprietary software.
Are there any case studies on how the world managed to react quickly and update systems in response to Heartbleed, for instance?
How do support providers, e.g. Red Hat or Oracle, react?
Feel free to forward this message to anyone who might have interesting resources on this subject.
Best, Hugo
PS: please keep me in To: if you respond
Hi Hugo,
Hugo Roy <hugo@fsfe.org> writes:
Are there any case studies on how the world managed to react quickly and update systems in response to Heartbleed, for instance?
I remember Black Duck had some reports comparing FLOSS and non-FLOSS software with respect to security. I found this one, but I'm sure there are more detailed documents:
https://info.blackducksoftware.com/rs/872-OLS-526/images/OSSAReportFINAL.pdf
Also, a bit older, but with more data: http://go.coverity.com/rs/157-LQW-289/images/2014-Coverity-Scan-Report.pdf
I'm not a specialist at all, and all these sources must be taken with a grain of salt, because the authors are often not neutral.
HTH,
-- Bastien
Thank you Bastien, this is interesting and helpful.
Does anyone have interesting articles about recently discovered vulnerabilities in free software?
Best, Hugo
Hi,
I don't have much to share, but I suspect these kinds of issues are best addressed by avoiding the following:
- Bundling.
- Customization without sending improvements to upstream.
- Reinventing the wheel.
- Containers.
This list is based on [[https://media.libreplanet.org/u/libreplanet/m/solving-the-deployment-crisis-...]] (licensed under CC BY-SA 4.0) and [[https://wingolog.org/archives/2015/11/09/embracing-conways-law]] (no license stated, so default copyright applies).
Interestingly, the GNU Guix project ([[https://www.gnu.org/software/guix/]]) tries to avoid all of the above.
I would also add that the following must be avoided:
- Digital handcuffs. This includes Restricted Boot, which is different from the benign Secure Boot ([[https://media.libreplanet.org/u/libby/m/embracing-secure-boot-and-rejecting-...]]). Restricted Boot is worth avoiding because the user is technically unable to update the system as they would wish: only the manufacturer's "trusted" operating system is accepted by the device. Even worse, the manufacturer itself might not even be allowed to ship such updates, because the carrier/front-provider, not the manufacturer, might be the only party that implemented Restricted Boot. (See the sketch below for the difference in who controls the boot keys.)
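To make the "who controls the keys" point concrete, here is a minimal sketch in Python (my own illustration; the names and the hash value are invented, and real firmware is far more involved). It models verified boot as an allow-list of image hashes: the verification mechanism is identical in both cases, and the only difference is whether the owner may add entries to the list.

    # Toy model of verified boot via a hash allow-list (hypothetical).
    import hashlib

    # Under Restricted Boot this set is fixed by the vendor and the
    # device's owner cannot change it.
    VENDOR_ALLOWED_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def image_hash(image: bytes) -> str:
        """Return the SHA-256 digest of a boot image as hex."""
        return hashlib.sha256(image).hexdigest()

    def firmware_boot(image: bytes, allowed_hashes: set) -> None:
        """Boot the image only if its hash is on the allow-list."""
        if image_hash(image) not in allowed_hashes:
            raise SystemExit("boot refused: image not on the allow-list")
        print("booting verified image")

    owners_os = b"the owner's freshly built kernel"

    # Restricted Boot: the owner's own OS is rejected.
    # firmware_boot(owners_os, VENDOR_ALLOWED_HASHES)  # would refuse

    # Secure Boot done right: the owner enrolls their own entry,
    # keeping verification *and* control.
    owner_allowed = VENDOR_ALLOWED_HASHES | {image_hash(owners_os)}
    firmware_boot(owners_os, owner_allowed)

The security mechanism is the same either way; what makes it handcuffs is that under Restricted Boot only the vendor may edit the list.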
Hugo Roy wrote:
I'm looking for case studies on security and free software, in particular the differences in how the world can respond to the discovery of important vulnerabilities/exploits in free software vs. proprietary software.
Structurally speaking, vulnerabilities in proprietary software can be kept hidden for a long time. For example, as Richard Stallman points out in https://stallman.org/apple.html, "Apple left a security hole in iTunes unfixed for 3 years after being informed about the problem. During that time, governments used that security hole to invade people's computers."
You should read his series of webpages on stallman.org (links to pages on other well-known proprietors can be found at the aforementioned URL) and https://www.gnu.org/proprietary/proprietary.html, both of which are filled with examples of proprietary malware and the reactions of the relevant proprietors.
Typically the reactions from proprietors are not good until there's publicity about the insecurity; embarrassment tends to push proprietors into fixing the problem ("problem" as viewed from a user's standpoint, of course). They've also reacted by promoting "responsible disclosure": a way to get people to work on the proprietor's behalf (as though the public works for the proprietor) by reporting vulnerabilities (sometimes for rewards) and remaining silent about them until the proprietor discloses them. This is entirely social pressure to conform -- one is under no obligation to work with a developer in this way. Free software hackers can take a different approach: they can illustrate the vulnerability with an exploit, publish a patch, and disclose everything at the same time, in a timely manner. This different approach is possible precisely because users have the freedoms of free software.
But there's always a chance that the security problem is not a bug when viewed from the proprietor's standpoint. Some proprietors work with other agencies (both governmental and private) to ensure there are ways to remotely investigate or control what a user's computer is doing. With non-free software, whether such a backdoor or service was intentional or accidental almost doesn't matter: either way, users lack the freedom to fix the issue, share the fix with others to help their community, and run the fixed software -- even if they can identify the problem and write a fix, they are not permitted to run or distribute it.
Cryptographically signed software is one way of effectively preventing users from running improved variants of the software they already have. This too plays a big role in your research, because users can find themselves stuck with a huge vulnerability they can't do anything about besides picking another computer. As I understand it, Intel systems have a backdoor pitched as a sysadmin convenience (called "Intel Active Management Technology") which users can only use through Intel's interface. Users can't control Intel AMT or replace it because it is cryptographically signed proprietary software: the signature is checked at every boot, and if the check fails the system won't stay up and running for long. It wouldn't surprise me if AMD has something comparable with the same restrictions: proprietary software, cryptographically signed and checked on boot, with users disallowed from removing AMD's key and installing their own (so that only code the user approves runs), and a signature failure resulting in a non-functional system.
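As a rough sketch of the mechanism described above (my own illustration in Python, not Intel's actual code; the names and messages are invented, and it needs the third-party "cryptography" package), a boot-time signature check with a vendor-held key might look like this:

    # Sketch of a vendor-keyed boot-time signature check (hypothetical).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Only the vendor holds the private key; the device ships with just
    # the public half, so only vendor-signed blobs can ever pass.
    vendor_private_key = Ed25519PrivateKey.generate()    # stays with vendor
    device_public_key = vendor_private_key.public_key()  # baked into device

    firmware_blob = b"proprietary management firmware"
    vendor_signature = vendor_private_key.sign(firmware_blob)

    def check_at_boot(blob: bytes, signature: bytes) -> None:
        """Run at every boot: refuse to stay up unless the blob verifies."""
        try:
            device_public_key.verify(signature, blob)
        except InvalidSignature:
            # Modeling the behavior described above: a failed check
            # means the system does not keep running.
            raise SystemExit("signature check failed: shutting down")
        print("firmware accepted")

    check_at_boot(firmware_blob, vendor_signature)  # passes
    # check_at_boot(b"user-fixed firmware", vendor_signature)  # shuts down

Since the user never holds the private key and can't replace the public one, a patched or improved blob can never pass the check, which is exactly the lock-in described above.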
Most recently, some tracker problems ("cell phone"/"mobile phone" hardly does the situation justice as a name) came to light regarding Android improperly restricting application access. As a result, according to https://arstechnica.com/information-technology/2017/07/stealthy-google-play-... apps could:
- Record calls
- Record VOIP
- Record from the device microphone
- Monitor the device's location
- Take screenshots
- Take photos with the device camera(s)
- Fetch device information and files
- Fetch user information (contacts, call logs, SMS, application-specific data)
In addition, two other apps which "had received 100,000 to 500,000 downloads" copied text messages and sent those copies elsewhere without the text message author's approval or consent.
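For anyone who wants to poke at this themselves, a low-tech starting point is listing the permissions an app declares. Here is a minimal sketch in Python (assuming the APK's AndroidManifest.xml has already been decoded from binary XML to plain XML, e.g. with apktool; the DANGEROUS set below is my own hand-picked illustration matching the capabilities above, not Android's official list):

    # Sketch: list permissions declared in a decoded AndroidManifest.xml.
    import xml.etree.ElementTree as ET

    ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

    # Hand-picked examples mirroring the capabilities reported in the
    # article above -- not Android's official "dangerous" list.
    DANGEROUS = {
        "android.permission.RECORD_AUDIO",
        "android.permission.READ_SMS",
        "android.permission.READ_CONTACTS",
        "android.permission.ACCESS_FINE_LOCATION",
        "android.permission.CAMERA",
    }

    def declared_permissions(manifest_path):
        """Return every <uses-permission> name declared in the manifest."""
        root = ET.parse(manifest_path).getroot()
        return [elem.get(ANDROID_NS + "name", "")
                for elem in root.iter("uses-permission")]

    for perm in declared_permissions("AndroidManifest.xml"):
        flag = "  <-- worth a close look" if perm in DANGEROUS else ""
        print(perm + flag)

Declared permissions are only a first approximation (the apps in the article reportedly went beyond what users would expect), but this illustrates how much inspection is possible when the platform doesn't hide things from you.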
Finally, you didn't say "intellectual property", but it is sure to come up in your research. The limits imposed by patents, copyrights, and trademarks are all relevant but quite different (consider the reason people are upset with systemd, versus the problems with Mono, versus the reason Mozilla apps had different names in some GNU/Linux distributions). Lumping these and other laws together as "intellectual property" conveys ignorance of these differences. https://www.gnu.org/philosophy/words-to-avoid.html#IntellectualProperty has more.