Free Software security vulnerabilities: Heartbleed and other case studies?

J.B. Nicholson jbn at forestfield.org
Sun Jul 30 21:10:05 UTC 2017


Hugo Roy wrote:
> I'm looking for case studies on security and free software, in
> particular the differences in how the world can respond to the discovery
> of important vulnerabilities/exploits on free software vs. proprietary
> software.

Structurally speaking, vulnerabilities in proprietary software can be kept 
hidden for a long time. For example, as Richard Stallman points out in 
https://stallman.org/apple.html, "Apple left a security hole in iTunes 
unfixed for 3 years after being informed about the problem. During that 
time, governments used that security hole to invade people's computers."

You should read his series of webpages on both stallman.org (links to 
pages on other well-known proprietors can be found at the aforementioned 
URL) and https://www.gnu.org/proprietary/proprietary.html, which are 
filled with examples of malware in proprietary software and the reactions 
of the relevant proprietors.

Typically the reactions from proprietors are not good until there's 
publicity about the insecurity. Embarrassment tends to push proprietors 
into fixing the problem ("problem" as viewed from a user's standpoint, of 
course). They've reacted by promoting "responsible disclosure": a way to 
get people to work on proprietors' behalf (as though the public works for 
the proprietor) by disclosing vulnerabilities (sometimes for rewards) and 
remaining silent about them until the proprietor discloses the 
vulnerability. This is purely social pressure to conform -- one is under 
no obligation to work with a developer in this way. Free software hackers 
can take a different approach: they can illustrate the vulnerability with 
an exploit, publish a patch, and disclose both at the same time, in a 
timely manner. This different approach is possible precisely because 
users have the freedoms of free software.

But there's always a chance that the security problem is not a bug (when 
viewed from the proprietor's standpoint). Some proprietors work with other 
agencies (both governmental and private) to ensure there are ways to 
remotely investigate or control what a user's computer is doing. With 
non-free software, whether such a backdoor or service was intentional or 
accidental almost doesn't matter: either way (as is always the case with 
non-free software) users don't have the freedom to fix the issue, share 
the fix with others to help their community, and run the fixed software, 
even if they can identify the problem and produce a fix themselves.

Cryptographic signing of software is one way of effectively preventing 
users from running improved variants of the software they already have. 
This too plays a big role in your research because users can find 
themselves stuck with a huge vulnerability they can't do anything about 
besides picking another computer. As I understand it, Intel systems have 
a backdoor pitched as a sysadmin convenience (called "Intel Active 
Management Technology") which users can only use through Intel's 
interface. Users can't control Intel AMT or replace it because it is 
cryptographically signed proprietary software. The cryptographic 
signature is checked at every boot, and if the check fails the system 
won't stay up and running for long. It wouldn't surprise me if AMD has 
something comparable to Intel's Active Management Technology with the 
same restrictions: proprietary software, cryptographically signed and 
checked on boot, with users disallowed from removing AMD's key and 
installing their own (which would let only code the user trusts run), 
and with a signature failure resulting in a non-functional system.
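
To make that lock-in mechanism concrete, here is a minimal sketch in 
Python (using the "cryptography" package) of a vendor-key check at boot. 
The names here (try_boot, VENDOR_PUBLIC, the firmware strings) are my 
hypothetical illustrations, not Intel's or AMD's actual code; the point 
is only that when the vendor's public key is fixed in the hardware, no 
user-modified image can ever pass the check.

    # Sketch: why a burned-in vendor key locks users out of their own
    # machines. Only images signed with the vendor's private key boot.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey)
    from cryptography.exceptions import InvalidSignature

    vendor_private = Ed25519PrivateKey.generate()  # held by the vendor only
    VENDOR_PUBLIC = vendor_private.public_key()    # stand-in for a key
                                                   # fused into the hardware

    def try_boot(image: bytes, signature: bytes) -> bool:
        """Boot only if the image carries a valid vendor signature."""
        try:
            VENDOR_PUBLIC.verify(signature, image)
        except InvalidSignature:
            return False  # check failed: the machine refuses to run
        return True

    official = b"vendor firmware image"
    patched = b"the same firmware with a user's security fix applied"
    sig = vendor_private.sign(official)

    assert try_boot(official, sig)      # the vendor's image boots
    assert not try_boot(patched, sig)   # the user's fixed image never will

Without the vendor's private key there is no signature a user could 
produce that the hardware would accept, so even a user who can identify 
and fix a vulnerability is stuck.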

Most recently, problems came to light with trackers (the names "cell 
phone"/"mobile phone" hardly do the situation justice) regarding Android 
improperly restricting application access. As a result, according to 
https://arstechnica.com/information-technology/2017/07/stealthy-google-play-apps-recorded-calls-and-stole-e-mails-and-texts/ 
apps could:

     Record calls
     Record VOIP
     Record from the device microphone
     Monitor the device's location
     Take screenshots
     Take photos with the device camera(s)
     Fetch device information and files
     Fetch user information (contacts, call logs, SMS, application-specific data)

and two other apps which "had received 100,000 to 500,000 downloads" also 
copied text messages and sent those copies elsewhere without the text 
message authors' approval or consent.

Finally, not that you said "intellectual property", but this is sure to 
come up in your research -- the limits imposed by patents, copyrights, 
and trademarks are all relevant but quite different (compare the reason 
people are upset with systemd, the problems with Mono, and the reason 
Mozilla apps had different names in some GNU/Linux distributions). 
Lumping these and other laws together as "intellectual property" conveys 
ignorance of these differences. 
https://www.gnu.org/philosophy/words-to-avoid.html#IntellectualProperty 
has more.



