Pegasus is military-grade spyware sold at incredible expense to nation-states, governments, and agencies, ostensibly to fight crime and terrorism, but reports say it’s being abused by authoritarian regimes against journalists and dissidents.
It can be implanted onto iPhones and Android phones through targeted one-click social engineering attacks like spear-phishing or, more recently, through zero-click payloads in messages. At which point it's… spyware on.
Now, Pegasus isn't known to have been used on phones registered on U.S. cellular networks, though it may have been used on U.S. citizens with phones registered on networks in other countries. (Think of Pegasus as outsourced signals intelligence: the U.S. and other world powers have their own capabilities, so they don't need it, and Pegasus being used to surveil their citizens wouldn't be taken kindly.)
Also, the vast, vast majority of people reading this right now simply aren't worth the time or expense required for it to be deployed against us. Sorry, but relatively speaking, we're boring. Still, it is 100% absolutely positively worth being as informed as possible about it. Because, beyond you or me, it might not just be a tool for law enforcement but a weapon against privacy and freedom, like a James Bond movie as written by Edward Snowden.
What is Pegasus, and why is it in the news?
Pegasus is spyware that’s maintained and licensed by a company called NSO Group to nation-states and used by the operatives of those nation-states to extract information from iPhones and Android phones and to track and monitor the people using them.
Amnesty International and Forbidden Stories, working with a consortium of over a dozen world news outlets including The Washington Post and The Guardian, released a series of coordinated reports over the weekend, basically accusing NSO of being less than forthright about who exactly is using their Pegasus spyware, and how much it’s really being used. In other words, they’re handing out cyber guns without really checking cyber IDs or running basic background checks. And maybe not just by the hundreds or thousands, but by the tens of thousands.
In an NSO statement, the company claims its spyware has been licensed to 60 undisclosed intelligence, military, and law enforcement agencies in 40 countries to prevent terrorist attacks and bombings and to break up drug and sex trafficking rings. In other words, they're heroes; they get approval from the state of Israel for all their sales, so just get all the way off their backs about it, ok?
But… the report claims NSO spyware is also being used by authoritarian regimes to target business executives, activists, journalists, politicians, diplomats, military and civilian agencies, and even heads of state, primarily in Mexico and the Middle East, but also in India, Pakistan and surrounding areas, and France, among other places. The goals: exposing sources, countering campaign strategies, and tracking, detaining, even murdering dissidents.
NSO says they don’t operate the spyware for their clients, do not have regular access to the data, and terminate the contracts of any clients found to be abusing the spyware. NSO also says that it’s technologically impossible for Pegasus to be used on U.S. phones and that the whole report is exaggerated, misleading, spurious, and just basically completely suss. This despite multiple independent investigations by security and academic groups working with the consortium.
Again, not something almost any of us has to worry about personally, but something all of us should be wary about globally and geopolitically.
What does this have to do with Apple?
Well, it has to do with Apple and Google because the Pegasus spyware is being deployed on iPhones and Android phones. They're our most personal devices, the ones that know the most about us, the ones that contain all of our private data and handle all of our private communication, and they also happen to have cameras and mics built in, so they're the biggest target for attacks like Pegasus.
This is how that works: A nation state or agency thereof contracts with NSO for a license to use Pegasus, just like you might get a license from Adobe to use Photoshop or any software-as-a-service.
Then, the Pegasus attacker identifies a high-value target and sends them a link through a messaging app like iMessage, WhatsApp, Signal, Messaging — could be anything. The message is designed specifically for the target and crafted in a way to entice them to click on it… which initiates the infection. That’s typically known as spear-phishing.
Spear because they’re sniping specific targets, not trawling with nets for any and every possible target. They don’t want to catch a lot of people. They don’t want a botnet or ransomware or anything that gets attention and increases the risk of discovery. That would result in their exploits being identified and fixed. No, they want to catch only very specific people. So the exploits they paid their small fortunes for don’t get burned and patched anywhere near as quickly.
More recently, Pegasus has also been deployed as zero-click messages. Meaning the target doesn’t even have to be tricked into clicking on a link. They just have to receive the message.
A message that contains something the app can’t properly parse or handle, something malformed or overflowing that exploits a bug and lets its spyware payload spill out of whatever protections the app provides and into the operating system.
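To make that class of bug concrete, here's a toy sketch in Python. This is not Pegasus and not a real exploit; real zero-click bugs live in far more complex parsers (image codecs, serialization code, media renderers), often in memory-unsafe languages where the consequence is memory corruption rather than a wrong return value. The message format and function names here are invented for illustration; the root cause they show is real: a parser trusting an attacker-controlled field in malformed input.

```python
import struct

def parse_message(data: bytes) -> bytes:
    """Naive parser for a toy length-prefixed format: 4-byte big-endian
    length, then the message body."""
    (length,) = struct.unpack(">I", data[:4])
    # BUG: nothing checks that `length` matches the bytes actually present.
    # A malformed header can make the code read past the real body or size
    # buffers based on a lie. In C or C++, this is how memory corruption
    # (and eventually code execution) starts.
    return data[4:4 + length]

def parse_message_safely(data: bytes) -> bytes:
    """Same format, but the length field is validated before use."""
    if len(data) < 4:
        raise ValueError("truncated header")
    (length,) = struct.unpack(">I", data[:4])
    if length != len(data) - 4:
        raise ValueError("length field does not match body")
    return data[4:]

# A well-formed message parses the same either way:
good = struct.pack(">I", 5) + b"hello"
assert parse_message(good) == b"hello"
assert parse_message_safely(good) == b"hello"

# A malformed message: the header claims 1000 bytes, the body has 5.
bad = struct.pack(">I", 1000) + b"hello"
parse_message(bad)            # silently returns the wrong thing
# parse_message_safely(bad)   # raises ValueError instead
```

The point isn't the toy format; it's that every field a message parser reads is attacker input, and any field it trusts without validation is a potential way in.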
It’s not strictly limited to messages either. They can also try to trick you into visiting a website that has the specially crafted link or payload and catch you that way.
In response to the report, Apple sent several outlets and me the following statement. Interestingly, it didn’t come from the PR team but from Ivan Krstic, who runs Security Engineering and Architecture, and has given detailed talks at Black Hat several times over the last few years:
Apple unequivocally condemns cyberattacks against journalists, human rights activists, and others seeking to make the world a better place. For over a decade, Apple has led the industry in security innovation, and, as a result, security researchers agree iPhone is the safest, most secure consumer mobile device on the market. Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals. While that means they are not a threat to the overwhelming majority of our users, we continue to work tirelessly to defend all our customers, and we are constantly adding new protections for their devices and data.
Why can’t Apple just fix it?
Apple and Google can and will fix any and all bugs they come across, including these, as fast as possible. Unfortunately, it doesn't sound like the consortium saw fit to disclose its findings to Apple or Google ahead of publication, so this specific version of Pegasus could have been patched much, much earlier. I mean, I wouldn't be surprised if they gave their web and video production teams earlier and better notice than they gave Apple and Google. Which, if true, personally, to me, is just so beyond gross.
Now, this is incredibly important work, it’s terrific that it’s been done, and I don’t believe reporters are under any obligation to disclose. But that’s what ethical security researchers would have done. The coverage would have been blockbuster regardless, but being able to say, “we shared this information with Apple and Google, and they patched the bug in previous updates, now let us tell you all about it” would not only have made for a much better story, at least in my opinion, but it would have burned the NSO exploits sooner, forced them to spend more money and use up more exploits to keep their spyware going, and potentially protected a lot of people in the meantime. Win for everyone.
I’d honest to Megatron all-caps love to know what they were thinking, or not thinking, by not disclosing it until now. Because one of the biggest dangers in reporting on malware is the temptation to sensationalize it for attention, to monetize the fear and paranoia of your audience, which just turns the reporting into another type of malware.
Why can’t Apple stop spyware like Pegasus from even happening?
The short answer is there’s no such thing as perfect code. Not even from NASA anymore. Systems and feature sets are so large and complicated, and there are so many of them that bugs are inevitable.
The vast majority of those bugs are harmless if annoying. Glitches or freezes or crashes. But others can be chained together to make an exploit. That’s how jailbreaks work.
It can take a long time and a lot of people or, in the case of Pegasus and other tools used by nation-states, massive amounts of resources, including money.
That's partially because ethical security researchers disclose those bugs and exploits to Apple, Google, and other platforms so they can fix them and protect us, the users. That leaves fewer bugs for the less-than-ethical people to sell, either directly or to companies like NSO.
Apple and others also have bug bounty programs, and while they can’t outbid nation states willing and able to pay almost anything, they can pay enough that it encourages a lot of researchers to stay ethical.
Either way, bugs are going to happen, and nation states and those that sell to nation states can afford to get them, and all we can judge companies on is how fast and well they fix bugs when they’re found.
Not just the platforms infected by them but also the infrastructure hosting and deploying them, including Amazon, which just announced it's shutting down the infrastructure NSO was using for the Pegasus spyware.
What about removing images, links, and other potential attack vectors from messaging?
Yeah, this is why we can’t have nice things. Every feature adds to the value of an app or device but also to the complexity and potential bugs in that app or device. Messaging apps could remove support for images, links, emoji, Unicode, everything that makes a modern messaging app a modern messaging app, but it would also trash the usefulness of messaging apps for the vast majority of people.
Also, the attacks would just move on to other vectors like webpages, app downloads, mail, USB devices, and so on.
It's like saying if there were no banks, no one would want to rob them. True… but it would be more than an inconvenience not to have banks, just like it would be not to have any features on our iPhones or Android phones.
What Apple, Google, and other companies can and are doing is continuing to harden iOS and Android to make it more difficult, time-consuming, and expensive to weaponize any exploits they find or purchase.
Apple has put Pointer Authentication Codes in silicon and created BlastDoor to prevent a lot of the previous types of iMessage attacks from getting through. Google has Project Zero, which tries to find and report bugs before they can be weaponized. And that's just the tip of the offense and defense security response iceberg.
It’s still a cat and mouse game, but they’re all playing to win.
What can you do if you think your phone has been infected with Pegasus?
If you really, seriously, zero-paranoia think you're a high-value, high-risk target of Pegasus spyware based on who you are and what you do, there's a Mobile Verification Toolkit you can use to detect it on iPhones and to try to detect it on Android phones, where detection is much more difficult.
It’s command-line only at the moment, but hopefully, that’ll change soon. Because of that, I’ll link to the extremely nerdy process in the description.
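For the curious, the workflow looks roughly like this on a Mac with an iPhone. This is a hedged sketch based on the Mobile Verification Toolkit's documentation at the time of writing: flag names, the indicator file, and the example paths may change, so check Amnesty's official MVT docs before relying on any of it.

```shell
# Install MVT (requires Python 3 and pip):
pip3 install mvt

# 1. Make an ENCRYPTED local backup of the iPhone via Finder/iTunes,
#    then decrypt it with MVT (paths and UDID are examples):
mvt-ios decrypt-backup -p <backup-password> -d ~/mvt/decrypted \
    ~/Library/Application\ Support/MobileSync/Backup/<device-udid>

# 2. Check the decrypted backup against Amnesty's published Pegasus
#    indicators of compromise (a STIX2 file from their investigations repo):
mvt-ios check-backup --iocs pegasus.stix2 --output ~/mvt/results ~/mvt/decrypted

# 3. Review the JSON output; files flagged as detections are the hits.
ls ~/mvt/results
```

On Android, the rough equivalent is `mvt-android check-adb` over a USB debugging connection, but as noted above, detection there is far less reliable.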
Also, while a lot of exploits simply can't persist after a reboot on the iPhone, it's currently unclear to me whether Pegasus can, either immediately or through pre- and post-reboot processes. It might be complicated, which might be why both successful and unsuccessful attacks are claimed to have been found. So, if you think you're infected, your safest bet at that point is probably scorched earth. Or at least scorched phone. Burn it down and start over with a fresh device.
That way, you’re absolutely sure.