Apple will have to become more aggressive in the way it controls the privacy promises developers make when they sell apps in the App Store. What can corporate users do to protect themselves and their users while they wait?
What is the problem?
Some developers continue to abuse the spirit of Apple’s App Store privacy guidelines. This extends to posting misleading information on app privacy labels, as well as outright breaking promises not to track devices: some apps ignore no-tracking requests and exfiltrate device-tracking information anyway.
The Washington Post, which recently launched its own digital ad network, identified several cases in which malicious App Store apps fail to keep their privacy promises to users.
When a user says they don’t want an app to track them, the app must honor that request. But the report cites numerous instances where apps continue to harvest the same information regardless of user requests. This data may be sold to third-party data-tracking companies or used for targeted advertising, according to the report. What it doesn’t say is that disrespecting users’ wishes is a betrayal of trust.
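The contract these apps break is simple: device-level tracking data may only be collected when the user has authorized it. Here is a minimal sketch in Python of what honoring that request looks like (the names are hypothetical; on-device, Apple enforces this through the AppTrackingTransparency framework, not code like this):

```python
# Hypothetical model of the tracking-consent contract: a cross-app
# identifier may only leave the device when the user has opted in.
from enum import Enum

class TrackingStatus(Enum):
    NOT_DETERMINED = "notDetermined"
    DENIED = "denied"
    AUTHORIZED = "authorized"

def collect_tracking_payload(status: TrackingStatus, device_id: str) -> dict:
    """Return the analytics payload an app may legitimately send."""
    if status is TrackingStatus.AUTHORIZED:
        # User opted in: the identifier may be included.
        return {"device_id": device_id, "tracked": True}
    # User denied (or hasn't decided yet): no identifier leaves the device.
    return {"device_id": None, "tracked": False}
```

The apps the Post flagged behave as if the first branch always runs, whatever the user chose.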
What could help?
The Post spoke to former iCloud engineer Johnny Lin, who maintains that, “When it comes to shutting down third-party trackers, App Tracking Transparency is a dud. Worse yet, giving users the option of pressing an ‘Ask App Not to Track’ button can even give users a false sense of privacy.”
This is a harsh criticism, and it seems appropriate to observe that Lin has an interest here. His company is developing Lockdown, which blocks “tracking, ads and malware” in all apps, not just Safari. Maybe Apple should take the same approach. But given the months of push-back the company faced when it introduced App Tracking Transparency, extending such protection Apple-wide will take time. Surveillance capitalism has a lot of money to spend opposing such plans; in the meantime, users, especially corporate users, should take steps to protect themselves.
We need education
Another approach is education. Whenever we see privacy issues popping up, we also seem to be faced with claims that a number of these malicious apps come in the form of bite-sized entertainment titles aimed at casual players and children.
Of course, an app that actively harvests data doesn’t care whether a parent installed the app or whether it was the parent’s child playing on a borrowed smartphone.
Users really need to learn to exercise good judgment about the apps they use. When it comes to kid-focused pester power, I’d say the safest approach is to use Apple Arcade and let your kids play whatever they want from there. It is not ideal, but it is a way of limiting the risks.
Adopt (but verify) gray computing apps
A third approach that should work is policy development. Businesses should take a close look at the apps that employees use on their devices to make sure they’re adhering to security policy.
The use of MDM systems and managed Apple IDs on the business side is expected to increase, but companies should also work closely with employees to identify the applications they actually use. Many companies now have a gray computing problem: employees adopt outside applications and services to do their jobs simply because those systems perform better than the tools the company provides. In most cases, banning them does not work.
A better approach is to identify these applications, compare them to the company’s security policy, and transparently explain why some cannot be used. This needs to be paired with work to ensure that your own apps are at least as easy to use as the gray-market alternatives. This engaged approach supports the personal autonomy of your teams far more effectively than autocratic diktats. The idea is that by working as a team, you end up with a more secure space. You can supplement this with classic MDM solutions.
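One way to operationalize that comparison is a simple policy review: inventory the apps employees report using, check each against the security policy, and record a human-readable reason for any rejection that can be shared transparently. A minimal sketch, with hypothetical policy fields and app names rather than any specific MDM product’s API:

```python
# Hypothetical security policy: which data practices are acceptable.
POLICY = {
    "allows_third_party_trackers": False,
    "requires_privacy_label": True,
}

# Hypothetical inventory of gray computing apps employees actually use.
APPS = [
    {"name": "NoteTakerX", "third_party_trackers": False, "has_privacy_label": True},
    {"name": "FreeScanPro", "third_party_trackers": True, "has_privacy_label": True},
    {"name": "QuickShare", "third_party_trackers": False, "has_privacy_label": False},
]

def review(app: dict) -> tuple[bool, str]:
    """Approve or reject an app, with a reason employees can be shown."""
    if app["third_party_trackers"] and not POLICY["allows_third_party_trackers"]:
        return False, "embeds third-party trackers, which policy forbids"
    if POLICY["requires_privacy_label"] and not app["has_privacy_label"]:
        return False, "ships without an App Store privacy label"
    return True, "meets current security policy"

for app in APPS:
    ok, reason = review(app)
    print(f"{app['name']}: {'approved' if ok else 'rejected'} ({reason})")
```

The point of returning a reason string is the transparency the article argues for: employees see why an app failed review, not just that it was banned.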
But what will make the biggest difference is policing. Apple has already said it will work with developers who break privacy promises, but it may need to toughen up that approach. I would say it should proactively monitor all apps against the privacy promises they make to ensure they live up to them.
Those that do not should be deleted.
It is also not sufficient to monitor only specific applications flagged by external parties. If a developer is found to be abusing privacy in one app, all of that developer’s apps should be audited.
Savvy consumers and security researchers can help by using applications such as Little Snitch, Lockdown, Guardian, Disconnect.me, and a host of others to monitor the activity apps generate. If an app promises privacy, it needs to be held accountable, and one way to do that is to use apps like these to monitor for privacy leaks and notify Apple when you identify an app that is leaking data without your authorization.
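Tools like these work by watching which hosts an app talks to. A toy version of the idea, using a hypothetical tracker blocklist (real firewall apps such as Little Snitch inspect live network connections rather than a prepared list of hosts):

```python
# Hypothetical blocklist of known tracker domains.
TRACKER_DOMAINS = {"tracker.example-ads.com", "metrics.example-data.net"}

def flag_privacy_leaks(contacted_hosts: list[str]) -> list[str]:
    """Return the tracker hosts an app contacted despite a no-tracking promise."""
    return sorted(h for h in contacted_hosts if h in TRACKER_DOMAINS)

# Example: an app whose privacy label claims it does not track.
leaks = flag_privacy_leaks(
    ["api.freescanpro.example", "tracker.example-ads.com"]
)
if leaks:
    print(f"Report to Apple: contacted {', '.join(leaks)} without authorization")
```

Any non-empty result is evidence worth sending to Apple alongside the app’s own privacy label.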
This approach – of educating yourself about the risks, working with your internal groups (family, employees, children) to manage and minimize the risk, and aggressively attempting to identify apps that break their privacy promise – should help make the environment more difficult for such attacks.
What could happen next
In my opinion, privacy fraud is just as serious as any other type of fraud. Apple already monitors the App Store for fraudulent behavior; last year, 150,000 apps were rejected for spamming, copying, or misleading users.
Now it has to do the same for privacy cheats.
Copyright © 2021 IDG Communications, Inc.