August 15, 2025
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Report from the Cambridge Cybercrime Conference
- Hacking Trains
- Security Vulnerabilities in ICEBlock
- New Mobile Phone Forensics Tool
- Another Supply Chain Vulnerability
- “Encryption Backdoors and the Fourth Amendment”
- Google Sues the Badbox Botnet Operators
- How the Solid Protocol Restores Digital Agency
- Subliminal Learning in AIs
- Microsoft SharePoint Zero-Day
- That Time Tom Lehrer Pranked the NSA
- Aeroflot Hacked
- Measuring the Attack/Defense Balance
- Cheating on Quantum Computing Benchmarks
- Spying on People Through Airportr Luggage Delivery Service
- First Sentencing in Scheme to Help North Koreans Infiltrate US Companies
- Surveilling Your Children with AirTags
- The Semiconductor Industry and Regulatory Compliance
- China Accuses Nvidia of Putting Backdoors into Their Chips
- Google Project Zero Changes Its Disclosure Policy
- Automatic License Plate Readers Are Coming to Schools
- The “Incriminating Video” Scam
- SIGINT During World War II
- AI Applications in Cybersecurity
- LLM Coding Integrity Breach
Report from the Cambridge Cybercrime Conference
[2025.07.14] The Cambridge Cybercrime Conference was held on 23 June. Summaries of the presentations are here.
Hacking Trains
[2025.07.16] Seems like an old system that predates any care about security:
The flaw has to do with the protocol used in a train system known as the End-of-Train and Head-of-Train. A Flashing Rear End Device (FRED), also known as an End-of-Train (EOT) device, is attached to the back of a train and sends data via radio signals to a corresponding device in the locomotive called the Head-of-Train (HOT). Commands can also be sent to the FRED to apply the brakes at the rear of the train.
These devices were first installed in the 1980s as a replacement for caboose cars, and unfortunately, they lack encryption and authentication protocols. Instead, the current system uses data packets sent between the front and back of a train that include a simple BCH checksum to detect errors or interference. But now, the CISA is warning that someone using a software-defined radio could potentially send fake data packets and interfere with train operations.
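The underlying weakness is generic: an error-detecting code like BCH involves no secret, so anyone who knows the algorithm can compute a valid check value for a forged packet. A message authentication code, by contrast, requires a shared key. A minimal sketch of the difference, with a CRC-16 standing in for the BCH code (the packet format and key are invented for illustration, not the actual EOT protocol):

    import hmac
    import hashlib

    def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
        """CRC-16/CCITT: detects transmission errors, but anyone can compute it."""
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
        return crc

    # An attacker with a software-defined radio can forge a "valid" packet,
    # because the check value depends only on public information.
    packet = b"EOT->HOT:APPLY_BRAKES"    # hypothetical packet format
    forged = packet + crc16_ccitt(packet).to_bytes(2, "big")

    # With a MAC, forging requires the shared secret key.
    key = b"secret-shared-by-HOT-and-EOT"    # hypothetical key
    authenticated = packet + hmac.new(key, packet, hashlib.sha256).digest()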
Security Vulnerabilities in ICEBlock
[2025.07.17] The ICEBlock tool has vulnerabilities:
The developer of ICEBlock, an iOS app for anonymously reporting sightings of US Immigration and Customs Enforcement (ICE) officials, promises that it “ensures user privacy by storing no personal data.” But that claim has come under scrutiny. ICEBlock creator Joshua Aaron has been accused of making false promises regarding user anonymity and privacy, being “misguided” about the privacy offered by iOS, and of being an Apple fanboy. The issue isn’t what ICEBlock stores. It’s about what it could accidentally reveal through its tight integration with iOS.
New Mobile Phone Forensics Tool
[2025.07.18] The Chinese have a new tool called Massistant.
- Massistant is the presumed successor to the Chinese forensics tool “MFSocket,” reported in 2019 and attributed to the publicly traded cybersecurity company Meiya Pico.
- The forensics tool works in tandem with corresponding desktop software.
- Massistant gains access to device GPS location data, SMS messages, images, audio, contacts and phone services.
- Meiya Pico maintains partnerships with domestic and international law enforcement, both as a provider of surveillance hardware and software and through training programs for law enforcement personnel.
From a news article:
The good news, per Balaam, is that Massistant leaves evidence of its compromise on the seized device, meaning users can potentially identify and delete the malware, either because the hacking tool appears as an app, or can be found and deleted using more sophisticated tools such as the Android Debug Bridge, a command line tool that lets a user connect to a device through their computer.
The bad news is that by the time Massistant is installed, the damage is done: the authorities already have the person’s data.
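For readers who want to check a device, here is a minimal sketch of that ADB workflow. It assumes adb is installed, USB debugging is enabled, and the package name below is a hypothetical placeholder for whatever the malware actually calls itself:

    import subprocess

    def adb(*args: str) -> str:
        """Run an adb command and return its output; requires adb on the PATH."""
        result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
        return result.stdout

    # List third-party packages and look for anything unexpected.
    for line in adb("shell", "pm", "list", "packages", "-3").splitlines():
        print(line)    # lines look like "package:com.example.app"

    # Remove the suspicious package (hypothetical name for illustration).
    adb("uninstall", "com.example.massistant")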
Slashdot thread.
Another Supply Chain Vulnerability
[2025.07.21] ProPublica is reporting:
Microsoft is using engineers in China to help maintain the Defense Department’s computer systems—with minimal supervision by U.S. personnel—leaving some of the nation’s most sensitive data vulnerable to hacking from its leading cyber adversary, a ProPublica investigation has found.
The arrangement, which was critical to Microsoft winning the federal government’s cloud computing business a decade ago, relies on U.S. citizens with security clearances to oversee the work and serve as a barrier against espionage and sabotage.
But these workers, known as “digital escorts,” often lack the technical expertise to police foreign engineers with far more advanced skills, ProPublica found. Some are former military personnel with little coding experience who are paid barely more than minimum wage for the work.
This sounds bad, but it’s the way the digital world works. Everything we do is international, deeply international. Making anything US-only is hard, and often infeasible.
EDITED TO ADD: Microsoft has stopped the practice.
“Encryption Backdoors and the Fourth Amendment”
[2025.07.22] Law journal article that looks at the Dual_EC_DRBG backdoor from a US constitutional perspective:
Abstract: The National Security Agency (NSA) reportedly paid and pressured technology companies to trick their customers into using vulnerable encryption products. This Article examines whether any of three theories removed the Fourth Amendment’s requirement that this be reasonable. The first is that a challenge to the encryption backdoor might fail for want of a search or seizure. The Article rejects this both because the Amendment reaches some vulnerabilities apart from the searches and seizures they enable and because the creation of this vulnerability was itself a search or seizure. The second is that the role of the technology companies might have brought this backdoor within the private-search doctrine. The Article criticizes the doctrine, particularly its origins in Burdeau v. McDowell, and argues that if it ever should apply, it should not here. The last is that the customers might have waived their Fourth Amendment rights under the third-party doctrine. The Article rejects this both because the customers were not on notice of the backdoor and because historical understandings of the Amendment would not have tolerated it. The Article concludes that none of these theories removed the Amendment’s reasonableness requirement.
Google Sues the Badbox Botnet Operators
[2025.07.23] It will be interesting to watch what will come of this private lawsuit:
Google on Thursday announced filing a lawsuit against the operators of the Badbox 2.0 botnet, which has ensnared more than 10 million devices running Android open source software.
These devices lack Google’s security protections, and the perpetrators pre-installed the Badbox 2.0 malware on them, to create a backdoor and abuse them for large-scale fraud and other illicit schemes.
This reminds me of Meta’s lawsuit against NSO Group over its Pegasus hack-for-hire software (which I wrote about here). It’s a private company stepping into a regulatory void left by governments.
Slashdot thread.
How the Solid Protocol Restores Digital Agency
[2025.07.24] The current state of digital identity is a mess. Your personal information is scattered across hundreds of locations: social media companies, IoT companies, government agencies, websites you have accounts on, and data brokers you’ve never heard of. These entities collect, store, and trade your data, often without your knowledge or consent. It’s both redundant and inconsistent. You have hundreds, maybe thousands, of fragmented digital profiles that often contain contradictory or logically impossible information. Each serves its own purpose, yet none of them gives you, the identity owner, any central oversight or control.
We’re used to the massive security failures resulting from all of this data under the control of so many different entities. Years of privacy breaches have resulted in a multitude of laws—in US states, in the EU, elsewhere—and calls for even more stringent protections. But while these laws attempt to protect data confidentiality, there is nothing to protect data integrity.
In this context, data integrity refers to the accuracy, consistency, and reliability of data throughout its lifecycle. It means ensuring that data is not only accurately recorded but also remains logically consistent across systems, is up to date, and can be verified as authentic. When data lacks integrity, it can contain contradictions, errors, or outdated information—problems that can have serious real-world consequences.
Without data integrity, someone could classify you as a teenager while simultaneously attributing to you three teenage children: a biological impossibility. What’s worse, you have no visibility into the data profiles assigned to your identity, no mechanism to correct errors, and no authoritative way to update your information across all platforms where it resides.
Integrity breaches don’t get the same attention that confidentiality breaches do, but the picture isn’t pretty. A 2017 write-up in The Atlantic found error rates exceeding 50% in some categories of personal information. A 2019 audit found that at least 40% of data-broker-sourced user attributes are “not at all” accurate. In 2022, the Consumer Financial Protection Bureau documented thousands of cases where consumers were denied housing, employment, or financial services based on logically impossible data combinations in their profiles. Similarly, the National Consumer Law Center report “Digital Denials” showed inaccuracies in tenant screening data that blocked people from housing.
And integrity breaches can have significant effects on our lives. In one 2024 British case, two companies blamed each other for the faulty debt information that caused catastrophic financial consequences for an innocent victim. Breonna Taylor was killed in 2020 during a police raid on her apartment in Louisville, Kentucky, when officers executed a “no-knock” warrant on the wrong house based on bad data. They had faulty intelligence connecting her address to a suspect who actually lived elsewhere.
In some instances, we have rights to view our data, and in others, rights to correct it, but these sorts of solutions have only limited value. When journalist Julia Angwin attempted to correct her information across major data brokers for her book Dragnet Nation, she found that even after submitting corrections through official channels, a significant number of errors reappeared within six months.
In some instances, we have the right to delete our data, but—again—this only has limited value. Some data processing is legally required, and some is necessary for services we truly want and need.
Our focus needs to shift from the binary choice of either concealing our data entirely or surrendering all control over it. Instead, we need solutions that prioritize integrity in ways that balance privacy with the benefits of data sharing.
It’s not as if we haven’t made progress in better ways to manage online identity. Over the years, numerous trustworthy systems have been developed that could solve many of these problems. For example, imagine digital verification that works like a locked mobile phone—it works when you’re the one who can unlock and use it, but not if someone else grabs it from you. Or consider a storage device that holds all your credentials, like your driver’s license, professional certifications, and healthcare information, and lets you selectively share one without giving away everything at once. Imagine being able to share just a single cell in a table or a specific field in a file. These technologies already exist, and they could let you securely prove specific facts about yourself without surrendering control of your whole identity. This isn’t just theoretically better than traditional usernames and passwords; the technologies represent a fundamental shift in how we think about digital trust and verification.
Standards to do all these things emerged during the Web 2.0 era. We mostly haven’t used them because platform companies have been more interested in building barriers around user data and identity. They’ve used control of user identity as a key to market dominance and monetization. They’ve treated data as a corporate asset, and resisted open standards that would democratize data ownership and access. Closed, proprietary systems have better served their purposes.
There is another way. The Solid protocol, invented by Sir Tim Berners-Lee, represents a radical reimagining of how data operates online. Solid stands for “SOcial LInked Data.” At its core, it decouples data from applications by storing personal information in user-controlled “data wallets”: secure, personal data stores that users can host anywhere they choose. Applications can access specific data within these wallets, but users maintain ownership and control.
Solid is more than distributed data storage. This architecture inverts the current data ownership model. Instead of companies owning user data, users maintain a single source of truth for their personal information. It integrates and extends all those established identity standards and technologies mentioned earlier, and forms a comprehensive stack that places personal identity at the architectural center.
This identity-first paradigm means that every digital interaction begins with the authenticated individual who maintains control over their data. Applications become interchangeable views into user-owned data, rather than data silos themselves. This enables unprecedented interoperability, as services can securely access precisely the information they need while respecting user-defined boundaries.
Solid ensures that user intentions are transparently expressed and reliably enforced across the entire ecosystem. Instead of each application implementing its own custom authorization logic and access controls, Solid establishes a standardized declarative approach where permissions are explicitly defined through control lists or policies attached to resources. Users can specify who has access to what data with granular precision, using simple statements like “Alice can read this document” or “Bob can write to this folder.” These permission rules remain consistent, regardless of which application is accessing the data, eliminating the fragmentation and unpredictability of traditional authorization systems.
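As a concrete illustration, Solid’s Web Access Control (WAC) system expresses exactly these statements as machine-readable rules stored alongside the resource. A minimal sketch in Turtle, with hypothetical WebIDs and URLs:

    @prefix acl: <http://www.w3.org/ns/auth/acl#> .

    # "Alice can read this document."
    <#aliceCanRead>
        a            acl:Authorization ;
        acl:agent    <https://alice.example/profile/card#me> ;
        acl:accessTo <https://bob.example/docs/report> ;
        acl:mode     acl:Read .

    # "Bob can write to this folder"; acl:default extends the rule to its contents.
    <#bobCanWrite>
        a            acl:Authorization ;
        acl:agent    <https://bob.example/profile/card#me> ;
        acl:accessTo <https://alice.example/photos/> ;
        acl:default  <https://alice.example/photos/> ;
        acl:mode     acl:Write .

Because the rule lives with the data rather than inside any application, every app that touches the resource sees the same permissions.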
This architectural shift decouples applications from data infrastructure. Unlike Web 2.0 platforms like Facebook, which require massive back-end systems to store, process, and monetize user data, Solid applications can be lightweight and focused solely on functionality. Developers no longer need to build and maintain extensive data storage systems, surveillance infrastructure, or analytics pipelines. Instead, they can build specialized tools that request access to specific data in users’ wallets, with the heavy lifting of data storage and access control handled by the protocol itself.
Let’s take healthcare as an example. The current system forces patients to spread pieces of their medical history across countless proprietary databases controlled by insurance companies, hospital networks, and electronic health record vendors. Patients frustratingly become a patchwork rather than a person, because they often can’t access their own complete medical history, let alone correct mistakes. Meanwhile, those third-party databases suffer regular breaches. The Solid protocol enables a fundamentally different approach. Patients maintain their own comprehensive medical record, with data cryptographically signed by trusted providers, in their own data wallet. When visiting a new healthcare provider, patients can arrive with their complete, verifiable medical history rather than starting from zero or waiting for bureaucratic record transfers.
When a patient needs to see a specialist, they can grant temporary, specific access to relevant portions of their medical history. For example, a patient referred to a cardiologist could share only cardiac-related records and essential background information. Or, on the flip side, the patient can share new and rich sources of related data, like health and nutrition data, with the specialist. The specialist, in turn, can add their findings and treatment recommendations directly to the patient’s wallet, with a cryptographic signature verifying their medical credentials. This process eliminates dangerous information gaps while ensuring that patients keep an appropriate say in who sees what about them and why.
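The signing step might look something like the following sketch, which uses Ed25519 from the Python cryptography library as a stand-in for whatever signature suite a real deployment would mandate; the record format and names are invented for illustration:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The specialist signs the record they add to the patient's wallet.
    # In practice the key would be bound to verified medical credentials.
    provider_key = Ed25519PrivateKey.generate()
    record = b'{"patient": "https://pat.example/#me", "finding": "...", "date": "2025-07-01"}'
    signature = provider_key.sign(record)

    # Anyone the patient later shares the record with can verify its provenance
    # offline; verify() raises InvalidSignature if the record was altered.
    provider_key.public_key().verify(signature, record)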
When a patient-doctor relationship ends, the patient retains all records generated during that relationship—unlike today’s system, where changing providers often means losing access to one’s historical records. The departing doctor’s signed contributions remain verifiable parts of the medical history, but the doctor no longer has direct access to the patient’s wallet without explicit permission.
For insurance claims, patients can provide temporary, auditable access to exactly the information needed for processing—no more and no less. Insurance companies receive verified data directly relevant to claims, but they cannot build uncontrolled, hidden, comprehensive profiles or retain information longer than privacy regulations allow. This approach dramatically reduces unauthorized data use, the risk of breaches (of both privacy and integrity), and administrative costs.
Perhaps most transformatively, this architecture enables patients to selectively participate in medical research while maintaining privacy. They could contribute anonymized or personalized data to studies matching their interests or conditions, with granular control over what information is shared and for how long. Researchers could gain access to larger, more diverse datasets while participants would maintain control over their information—creating a proper ethical model for advancing medical knowledge.
The implications extend far beyond healthcare. In financial services, customers could maintain verified transaction histories and creditworthiness credentials independently of credit bureaus. In education, students could collect verified credentials and portfolios that they truly own rather than relying on institutions’ siloed records. In employment, workers could maintain portable professional histories with verified credentials from past employers. In each case, Solid enables individuals to be the masters of their own data while allowing verification and selective sharing.
The economics of Web 2.0 pushed us toward centralized platforms and surveillance capitalism, but there has always been a better way. Solid brings different pieces together into a cohesive whole that enables the identity-first architecture we should have had all along. The protocol doesn’t just solve technical problems; it corrects the fundamental misalignment of incentives that has made the modern web increasingly hostile to both users and developers.
As we look to a future of increased digitization across all sectors of society, the need for this architectural shift becomes even more apparent. Individuals should be able to maintain and present their own verified digital identity and history, rather than being at the mercy of siloed institutional databases. The Solid protocol makes this future technically possible.
This essay was written with Davi Ottenheimer, and originally appeared on The Inrupt Blog.
Subliminal Learning in AIs
[2025.07.25] Today’s freaky LLM behavior:
We study subliminal learning, a surprising phenomenon where language models learn traits from model-generated data that is semantically unrelated to those traits. For example, a “student” model learns to prefer owls when trained on sequences of numbers generated by a “teacher” model that prefers owls. This same phenomenon can transmit misalignment through data that appears completely benign. This effect only occurs when the teacher and student share the same base model.
Interesting security implications.
I am more convinced than ever that we need serious research into AI integrity if we are ever going to have trustworthy AI.
Microsoft SharePoint Zero-Day
[2025.07.28] Chinese hackers are exploiting a high-severity vulnerability in Microsoft SharePoint to steal data worldwide:
The vulnerability, tracked as CVE-2025-53770, carries a severity rating of 9.8 out of a possible 10. It gives unauthenticated remote access to SharePoint Servers exposed to the Internet. Starting Friday, researchers began warning of active exploitation of the vulnerability, which affects SharePoint Servers that infrastructure customers run in-house. Microsoft’s cloud-hosted SharePoint Online and Microsoft 365 are not affected.
Here are Microsoft’s patching instructions. Patching isn’t enough, though, as attackers have used the vulnerability to steal authentication credentials. It’s an absolute mess. CISA has more information. Also these four links. Two Slashdot threads.
This is an unfolding security mess, and quite the hacking coup.
That Time Tom Lehrer Pranked the NSA
[2025.07.28] Bluesky thread. Here’s the paper, from 1957. Note reference 3.
Aeroflot Hacked
Measuring the Attack/Defense Balance
[2025.07.30] “Who’s winning on the internet, the attackers or the defenders?”
I’m asked this all the time, and I can only ever give a qualitative hand-wavy answer. But Jason Healey and Tarang Jain’s latest Lawfare piece has amassed data.
The essay provides the first framework for metrics about how we are all doing collectively—and not just how an individual network is doing. Healey wrote to me in email:
The work rests on three key insights: (1) defenders need a framework (based in threat, vulnerability, and consequence) to categorize the flood of potentially relevant security metrics; (2) trends are what matter, not specifics; and (3) to start, we should avoid getting bogged down in collecting data and just use what’s already being reported by amazing teams at Verizon, Cyentia, Mandiant, IBM, FBI, and so many others.
The surprising conclusion: there’s a long way to go, but we’re doing better than we think. There are substantial improvements across threat operations, threat ecosystem and organizations, and software vulnerabilities. Unfortunately, we’re still not seeing increases in consequence. And since cost imposition is leading to a survival-of-the-fittest contest, we’re stuck with perhaps fewer but fiercer predators.
And this is just the start. From the report:
Our project is proceeding in three phases—the initial framework presented here is only phase one. In phase two, the goal is to create a more complete catalog of indicators across threat, vulnerability, and consequence; encourage cybersecurity companies (and others with data) to report defensibility-relevant statistics in time-series, mapped to the catalog; and drive improved analysis and reporting.
This is really good, and important, work.
Cheating on Quantum Computing Benchmarks
[2025.07.31] Peter Gutmann and Stephan Neuhaus have a new paper—I think it’s new, even though it has a March 2025 date—that makes the argument that we shouldn’t trust any of the quantum factorization benchmarks, because everyone has been cooking the books:
Similarly, quantum factorisation is performed using sleight-of-hand numbers that have been selected to make them very easy to factorise using a physics experiment and, by extension, a VIC-20, an abacus, and a dog. A standard technique is to ensure that the factors differ by only a few bits that can then be found using a simple search-based approach that has nothing to do with factorisation…. Note that such a value would never be encountered in the real world since the RSA key generation process typically requires that |p-q| > 100 or more bits [9]. As one analysis puts it, “Instead of waiting for the hardware to improve by yet further orders of magnitude, researchers began inventing better and better tricks for factoring numbers by exploiting their hidden structure” [10].
A second technique used in quantum factorisation is to use preprocessing on a computer to transform the value being factorised into an entirely different form or even a different problem to solve which is then amenable to being solved via a physics experiment…
Lots more in the paper, which is titled “Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog.” The authors point out that the largest number factored legitimately by a quantum computer is 35.
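The “simple search-based approach” the paper describes is essentially Fermat’s factorization method, which succeeds almost immediately when the two factors are close together. A minimal sketch (the toy primes below are illustrative, not one of the published benchmark values):

    from math import isqrt

    def fermat_factor(n: int) -> tuple[int, int]:
        """Factor n = p*q by searching near sqrt(n); fast only when |p - q| is small."""
        x = isqrt(n)
        if x * x < n:
            x += 1
        while True:
            y2 = x * x - n
            y = isqrt(y2)
            if y * y == y2:    # x^2 - n is a perfect square
                return x - y, x + y    # n = (x - y)(x + y)
            x += 1

    # Two 31-bit primes differing by only 18: the loop exits on its first pass.
    p, q = 2147483629, 2147483647
    print(fermat_factor(p * q))    # (2147483629, 2147483647)

A properly generated RSA modulus has |p - q| on the order of sqrt(n), which puts this search hopelessly out of reach.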
I hadn’t known these details, but I’m not surprised. I have long said that the engineering problems between now and a useful, working quantum computer are hard. And by “hard,” we don’t know if it’s “land a person on the surface of the moon” hard or “land a person on the surface of the sun” hard. They’re both hard, but very different. And we’re going to hit those engineering problems one by one as we continue to develop the technology. While I don’t think quantum computing is “surface of the sun” hard, I don’t expect quantum computers to be factoring RSA moduli anytime soon. And—even there—I expect lots of engineering challenges in making Shor’s algorithm work on an actual quantum computer with large numbers.
Spying on People Through Airportr Luggage Delivery Service
[2025.08.01] Airportr is a service that allows passengers to have their luggage picked up, checked, and delivered to their destinations. As you might expect, it’s used by wealthy or important people. So if the company’s website is insecure, you’d be able to spy on lots of wealthy or important people. And maybe even steal their luggage.
Researchers at the firm CyberX9 found that simple bugs in Airportr’s website allowed them to access virtually all of those users’ personal information, including travel plans, or even gain administrator privileges that would have allowed a hacker to redirect or steal luggage in transit. Among even the small sample of user data that the researchers reviewed and shared with WIRED, they found what appear to be the personal information and travel records of multiple government officials and diplomats from the UK, Switzerland, and the US.
“Anyone would have been able to gain or might have gained absolute super-admin access to all the operations and data of this company,” says Himanshu Pathak, CyberX9’s founder and CEO. “The vulnerabilities resulted in complete confidential private information exposure of all airline customers in all countries who used the service of this company, including full control over all the bookings and baggage. Because once you are the super-admin of their most sensitive systems, you have have [sic] the ability to do anything.”
First Sentencing in Scheme to Help North Koreans Infiltrate US Companies
[2025.08.04] An Arizona woman was sentenced to eight-and-a-half years in prison for her role helping North Korean workers infiltrate US companies by pretending to be US workers.
From an article:
According to court documents, Chapman hosted the North Korean IT workers’ computers in her own home between October 2020 and October 2023, creating a so-called “laptop farm” which was used to make it appear as though the devices were located in the United States.
The North Koreans were hired as remote software and application developers with multiple Fortune 500 companies, including an aerospace and defense company, a major television network, a Silicon Valley technology company, and a high-profile company.
As a result of this scheme, they collected over $17 million in illicit revenue paid for their work, which was shared with Chapman, who processed their paychecks through her financial accounts.
“Chapman operated a ‘laptop farm’ where she received and hosted computers from the U.S. companies in her home, so that the companies would believe the workers were in the United States,” the Justice Department said on Thursday.
“Chapman also shipped 49 laptops and other devices supplied by U.S. companies to locations overseas, including multiple shipments to a city in China on the border with North Korea. More than 90 laptops were seized from Chapman’s home following the execution of a search warrant in October 2023.”
Surveilling Your Children with AirTags
[2025.08.05] Skechers is making a line of kids’ shoes with a hidden compartment for an AirTag.
The Semiconductor Industry and Regulatory Compliance
[2025.08.06] Earlier this week, the Trump administration narrowed export controls on advanced semiconductors ahead of US-China trade negotiations. The administration is increasingly relying on export licenses to allow American semiconductor firms to sell their products to Chinese customers, while keeping the most powerful of them out of the hands of our military adversaries. These are the chips that power the artificial intelligence research fueling China’s technological rise, as well as the advanced military equipment underpinning Russia’s invasion of Ukraine.
The US government relies on private-sector firms to implement those export controls. It’s not working. US-manufactured semiconductors have been found in Russian weapons. And China is skirting American export controls to accelerate AI research and development, with the explicit goal of enhancing its military capabilities.
American semiconductor firms are unwilling or unable to restrict the flow of semiconductors. Instead of investing in effective compliance mechanisms, these firms have consistently prioritized their bottom lines—a rational decision, given the fundamentally risky nature of the semiconductor industry.
We can’t afford to wait for semiconductor firms to catch up gradually. To create a robust regulatory environment in the semiconductor industry, both the US government and chip companies must take clear and decisive actions today and consistently over time.
Consider the financial services industry. Those companies are also heavily regulated, implementing US government regulations ranging from international sanctions to anti-money laundering. For decades, these companies have invested heavily in compliance technology. Large banks maintain teams of compliance employees, often numbering in the thousands.
The companies understand that by entering the financial services industry, they assume the responsibility to verify their customers’ identities and activities, refuse services to those engaged in criminal activity, and report certain activities to the authorities. They take these obligations seriously because they know they will face massive fines when they fail. Across the financial sector, the Securities and Exchange Commission imposed a whopping $6.4 billion in penalties in 2022. For example, TD Bank recently paid almost $2 billion in penalties because of its ineffective anti-money laundering efforts.
An executive order issued earlier this year applied a similar regulatory model to potential “know your customer” obligations for certain cloud service providers.
If Trump’s new license-focused export controls are to be effective, the administration must increase the penalties for noncompliance. The Commerce Department’s Bureau of Industry and Security (BIS) needs to more aggressively enforce its regulations by sharply increasing penalties for export control violations.
BIS has been working to improve enforcement, as evidenced by this week’s news of a $95 million penalty against Cadence Design Systems for violating export controls on its chip design technology. Unfortunately, BIS lacks the people, technology, and funding to enforce these controls across the board.
The Trump administration should also use its bully pulpit, publicly naming companies that break the rules and encouraging American firms and consumers to do business elsewhere. Regulatory threats and bad publicity are the only ways to force the semiconductor industry to take export control regulations seriously and invest in compliance.
With those threats in place, American semiconductor firms must accept their obligation to comply with regulations and cooperate. They need to invest in strengthening their compliance teams and conduct proactive audits of their subsidiaries, their customers, and their customers’ customers.
Firms should elevate risk and compliance voices onto their executive leadership teams, similar to the chief risk officer role found in banks. Senior leaders need to devote their time to regular progress reviews focused on meaningful, proactive compliance with export controls and other critical regulations, thereby leading their organizations to make compliance a priority.
As the world becomes increasingly dangerous and America’s adversaries become more emboldened, we need to maintain stronger control over our supply of critical semiconductors. If Russia and China are allowed unfettered access to advanced American chips for their AI efforts and military equipment, we risk losing the military advantage and our ability to deter conflicts worldwide. The geopolitical importance of semiconductors will only increase as the world becomes more dangerous and more reliant on advanced technologies—American security depends on limiting their flow.
This essay was written with Andrew Kidd and Celine Lee, and originally appeared in The National Interest.
China Accuses Nvidia of Putting Backdoors into Their Chips
[2025.08.07] The government of China has accused Nvidia of inserting a backdoor into their H20 chips:
China’s cyber regulator on Thursday said it had held a meeting with Nvidia over what it called “serious security issues” with the company’s artificial intelligence chips. It said US AI experts had “revealed that Nvidia’s computing chips have location tracking and can remotely shut down the technology.”
Google Project Zero Changes Its Disclosure Policy
[2025.08.08] Google’s vulnerability finding team is again pushing the envelope of responsible disclosure:
Google’s Project Zero team will retain its existing 90+30 policy regarding vulnerability disclosures, in which it provides vendors with 90 days before full disclosure takes place, with a 30-day period allowed for patch adoption if the bug is fixed before the deadline.
However, as of July 29, Project Zero will also release limited details about any discovery they make within one week of vendor disclosure. This information will encompass:
- The vendor or open-source project that received the report
- The affected product
- The date the report was filed and when the 90-day disclosure deadline expires
I have mixed feelings about this. On the one hand, I like that it puts more pressure on vendors to patch quickly. On the other hand, if no indication is provided regarding how severe a vulnerability is, it could easily cause unnecessary panic.
The problem is that Google is not a neutral party in vulnerability hunting. To the extent that it finds and publicizes vulnerabilities in competitors’ products, reducing confidence in them, Google benefits as a company.
Automatic License Plate Readers Are Coming to Schools
[2025.08.11] Fears around children are opening up a new market for automatic license plate readers.
The “Incriminating Video” Scam
[2025.08.12] A few years ago, scammers invented a new phishing email. They would claim to have hacked your computer, turned your webcam on, and videoed you watching porn or having sex. BuzzFeed has an article talking about a “shockingly realistic” variant, which includes photos of you and your house—more specific information.
The article contains “steps you can take to figure out if it’s a scam,” but omits the first and most fundamental piece of advice: if the hacker had incriminating video of you, they would show you a clip. Just a taste: not the worst bits, so you would still worry about how bad the rest could be, but something. If the hacker doesn’t show you any video, they don’t have any video. Everything else is window dressing.
I remember when this scam was first invented. I calmed several people who were legitimately worried with that one fact.
SIGINT During World War II
[2025.08.13] The NSA and GCHQ have jointly published a history of World War II SIGINT: “Secret Messengers: Disseminating SIGINT in the Second World War.” This is the story of the British SLUs (Special Liaison Units) and the American SSOs (Special Security Officers).
AI Applications in Cybersecurity
[2025.08.13] There is a really great series of online events highlighting cool uses of AI in cybersecurity, titled Prompt||GTFO. Videos from the first three events are online. And here’s where to register to attend, or participate, in the fourth.
Some really great stuff here.
LLM Coding Integrity Breach
[2025.08.14] Here’s an interesting story about a failure introduced by LLM-written code. Specifically, the LLM was doing some code refactoring, and when it moved a chunk of code from one file to another, it changed a “break” to a “continue.” That turned an error logging statement into an infinite loop, which crashed the system.
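A minimal reconstruction of the failure mode (invented code, not the actual system): the loop below treats exhausted input as an error, logs it, and exits. Changing the break to a continue re-enters the loop body, hits the same condition, and logs forever.

    import logging

    logging.basicConfig(level=logging.ERROR)
    log = logging.getLogger("worker")

    def drain(events: list[str]) -> None:
        it = iter(events)
        while True:
            ev = next(it, "ERROR")    # exhausted input yields the error sentinel
            if ev == "ERROR":
                log.error("bad event; stopping")
                break                 # correct: leave the loop after logging
                # continue            # the one-token change: infinite logging loop
            print("handled", ev)

    drain(["a", "b"])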
This is an integrity failure. Specifically, it’s a failure of processing integrity. And while we can think of particular patches that alleviate this exact failure, the larger problem is much harder to solve.
Davi Ottenheimer comments.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, A Hacker’s Mind—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
Copyright © 2025 by Bruce Schneier.