A. Contribution

  1. Problem addressed by the paper

Detecting privacy disclosures in Android apps with a system that incorporates sensitive user input in its detection coverage.

  2. Solution proposed in the paper. Why is it better than previous work?

Previous disclosure detection systems are API-based; the data sources they track are usually protected by access-permission requirements. This paper proposes a new method that also covers user input, which is not protected by any access permission even though it may reveal sensitive data (e.g., passwords or credit card numbers typed into input fields).

  3. The major results.

Tested on 16,000 popular Android apps, SUPOR achieves an average precision of 97.3% and an average recall of 97.3% for sensitive user input identification. SUPOR also finds 355 apps with privacy disclosures, with a false positive rate of 8.7%.

B. Basic idea and approach. How does the solution work?

SUPOR first examines the layout files of an app to locate its input fields and text labels. It then uses geometry-based layout analysis to associate each input field with the text label physically closest to it, and matches the associated labels against a database of sensitive keywords to determine which input fields are sensitive. This UI analysis is then combined with an off-the-shelf static analysis tool to identify disclosures of sensitive user input, and the approach is applied to 16,000 popular Android apps. Unlike UIPicker, SUPOR only selects text labels that are physically close to the input fields, mimicking how users look at the UI.
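As a rough illustration of the label-association step, the sketch below pairs each input field with its geometrically closest label and checks that label against a keyword list. The UiElement record, the squared-distance metric, and the tiny keyword set are simplified placeholders for illustration only, not SUPOR's actual data structures or vocabulary.

```java
import java.util.*;

// Minimal sketch (illustrative, not SUPOR's actual code): associate each
// input field with its geometrically closest text label, then flag the
// field as sensitive if that label contains a sensitive keyword.
public class SensitiveInputSketch {

    // Hypothetical UI element with absolute screen bounds taken from the layout.
    record UiElement(String id, String text, int left, int top, int right, int bottom) {
        int centerX() { return (left + right) / 2; }
        int centerY() { return (top + bottom) / 2; }
    }

    // Toy keyword database; SUPOR builds a much richer one.
    static final Set<String> SENSITIVE_KEYWORDS =
            Set.of("password", "credit card", "ssn", "phone", "address");

    // Squared distance between element centers, a proxy for "physically close on screen".
    static long distance(UiElement a, UiElement b) {
        long dx = a.centerX() - b.centerX();
        long dy = a.centerY() - b.centerY();
        return dx * dx + dy * dy;
    }

    // For each input field, pick the nearest label and match it against the keyword list.
    static Map<UiElement, Boolean> classify(List<UiElement> inputFields,
                                            List<UiElement> labels) {
        Map<UiElement, Boolean> result = new LinkedHashMap<>();
        for (UiElement field : inputFields) {
            UiElement nearest = labels.stream()
                    .min(Comparator.comparingLong((UiElement l) -> distance(field, l)))
                    .orElse(null);
            boolean sensitive = nearest != null && SENSITIVE_KEYWORDS.stream()
                    .anyMatch(k -> nearest.text().toLowerCase().contains(k));
            result.put(field, sensitive);
        }
        return result;
    }

    public static void main(String[] args) {
        List<UiElement> labels = List.of(
                new UiElement("lbl1", "Password", 20, 100, 120, 130),
                new UiElement("lbl2", "Nickname", 20, 200, 120, 230));
        List<UiElement> fields = List.of(
                new UiElement("edit1", "", 130, 100, 400, 140),
                new UiElement("edit2", "", 130, 200, 400, 240));
        // edit1 pairs with "Password" (sensitive), edit2 with "Nickname" (not sensitive).
        classify(fields, labels).forEach((f, s) ->
                System.out.println(f.id() + " sensitive=" + s));
    }
}
```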


C. Strengths

  1. Its detection coverage is broader than that of previous work, since it also covers sensitive user input rather than only permission-protected API sources.

D. Weaknesses

  1. It will not detect apps that use non-standard UI layouts. Big developers who want their users to have a similar experience across their multi-platform apps might build their UIs this way.
  2. It may miss apps developed by non-English speakers, who might write the labels, function names, and variable names in their app code in a non-English language.
  3. This approach is less useful for end users since it detects all privacy disclosures without distinguishing legitimate disclosures from suspicious ones, unlike "Checking More and Alerting Less: Detecting Privacy Leakages via Enhanced Data-flow Analysis and Peer Voting" (NDSS '15). Some privacy disclosures might be legitimate because they support users' tasks.
  4. The system is mainly useful for app marketplace providers such as Google Play. It is less useful for end users because deploying it for general users would not be easy.

E. Future work, Open issues, possible improvements

  1. It should be developed further to distinguish legitimate privacy disclosures from suspicious privacy leakages, for example by incorporating the peer-voting mechanism from AAPL (Checking More and Alerting Less, NDSS '15) or another method (see the sketch after this list).
  2. False positives and false negatives could be further reduced by tuning the system once the main causes of these errors are identified.
  3. It could also be used to further analyze which apps legitimately need sensitive data but do not protect that data with encryption; such data could be intercepted by an attacker in transit.
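
To make the first suggestion more concrete, the sketch below illustrates the peer-voting idea in the spirit of AAPL (a rough reading of the concept, not AAPL's actual algorithm): a disclosure is treated as suspicious when few functionally similar peer apps disclose the same data type. The method name, the peer map, and the 0.5 threshold are all hypothetical.

```java
import java.util.*;

// Illustrative sketch of a peer-voting-style filter (not AAPL's actual algorithm):
// an app's disclosure of a data type is flagged as suspicious when few of its
// functionally similar peer apps disclose the same type.
public class PeerVotingSketch {

    // Hypothetical input: each peer app mapped to the set of data types it discloses.
    static boolean isSuspicious(String dataType,
                                Map<String, Set<String>> peerDisclosures,
                                double threshold) {
        if (peerDisclosures.isEmpty()) {
            return true; // no peers to vote: conservatively treat as suspicious
        }
        long votes = peerDisclosures.values().stream()
                .filter(types -> types.contains(dataType))
                .count();
        double fraction = (double) votes / peerDisclosures.size();
        // If most peers disclose the same data type, the disclosure is likely
        // legitimate for this category of apps; otherwise flag it.
        return fraction < threshold;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> peers = Map.of(
                "peerA", Set.of("location", "phone"),
                "peerB", Set.of("location"),
                "peerC", Set.of("location", "password"));
        System.out.println(isSuspicious("location", peers, 0.5)); // false: common among peers
        System.out.println(isSuspicious("password", peers, 0.5)); // true: rare among peers
    }
}
```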