
Last night, I did a quick data pull and social network analysis of 5,000 English-language X posts about the Iran/Israel/US conflict and its regional spillover. The dataset captured 8,161 interactions between 6,426 accounts during what turned out to be one of the most active periods of online conversation since Operation Epic Fury began. The findings raise some questions worth sitting with.
The Network at a Glance
The header image above maps those interactions as a network graph. Node and label size are scaled by eigenvector centrality, meaning influence is weighted by the influence of your connections. This isn’t just a measure of how many people tagged an account. It reflects whether the most connected, most active people in the network tagged it. That distinction matters when you interpret what you’re looking at.
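If you want to reproduce that measure, here’s a minimal sketch in Python with networkx. The file and column names are placeholders standing in for a NodeXL-style edge list export, not the actual dataset:

```python
# Minimal sketch: eigenvector centrality on a directed interaction graph.
# "edges.csv" and its column names are placeholders, not the dataset
# behind this post.
import networkx as nx
import pandas as pd

edges = pd.read_csv("edges.csv")  # one row per reply/mention/quote
G = nx.from_pandas_edgelist(edges, source="source", target="target",
                            create_using=nx.DiGraph)

# For directed graphs, networkx's eigenvector centrality scores in-edges:
# an account ranks high when high-ranking accounts tag it, not merely when
# many accounts do. max_iter is raised because sparse crisis-chatter graphs
# can be slow to converge.
centrality = nx.eigenvector_centrality(G, max_iter=1000)

for handle, score in sorted(centrality.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{handle}\t{score:.4f}")
```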
Every coloured strand is an edge: a reply, mention, or quote post. The seven distinct colour bands represent the seven largest communities in the network, each accounting for at least 1% of all accounts (the largest, in purple, holds about 26%). Together they cover 40.6% of the full dataset, which means nearly 60% of the sampled network isn’t even shown here.
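The community bands come from modularity clustering (the Louvain method, which is what Gephi’s modularity class runs). Continuing the sketch above, the detection step and the 1% cutoff look roughly like this:

```python
# Sketch of the community step: Louvain modularity, keeping only
# communities that hold at least 1% of accounts. Reuses the graph G
# built in the previous sketch.
import networkx as nx

G_u = G.to_undirected()  # modularity computed here on the undirected graph
communities = nx.community.louvain_communities(G_u, seed=42)

cutoff = 0.01 * G_u.number_of_nodes()
big = sorted((c for c in communities if len(c) >= cutoff), key=len, reverse=True)

covered = sum(len(c) for c in big) / G_u.number_of_nodes()
print(f"{len(big)} communities above 1%, covering {covered:.1%} of accounts")
```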
And yet, with all of that filtering applied, one node still dominates the entire frame. The most central account in this conversation isn’t a journalist, a politician, or a media outlet. It’s @grok, X’s own AI chatbot.
What Happens When You Remove Grok
To get a clearer picture of how much structural weight Grok is carrying, I removed it entirely and applied a giant component filter to isolate the human interaction network.

What’s left is 1,172 nodes, just 18.19% of the original network. Remove one AI node and over 80% of the connected structure disappears. The second image is that network, and it looks nothing like the first one.
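The ablation itself is simple to reproduce. Here’s a sketch of the equivalent operation on the graph G from the earlier snippets, with the node label a placeholder for however the handle appears in your edge list:

```python
# Sketch of the ablation: drop the @grok node, then keep only the giant
# (largest weakly connected) component, mirroring Gephi's filter.
# Reuses the directed graph G from the earlier sketch.
import networkx as nx

H = G.copy()
H.remove_node("grok")  # placeholder: the handle as it appears in the edge list

giant = max(nx.weakly_connected_components(H), key=len)
H_giant = H.subgraph(giant).copy()

share = H_giant.number_of_nodes() / G.number_of_nodes()
print(f"{H_giant.number_of_nodes()} nodes remain ({share:.2%} of the original)")
```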
Grok’s dominance here isn’t just visual. It’s structural. The network is held together largely by people routing their activity through an AI intermediary rather than through each other. That’s a meaningful shift in how online discourse functions during a crisis.
Who the Human Hubs Are
Once Grok is removed, the accounts that emerge as hubs tell their own story: @tuckercarlson, @berniesanders, @repjasoncrow, @secrubio, @whitehouse, @idf, and @osint613.
Right-wing media. Progressive politics. Congressional Democrats. The Secretary of State. The official White House account. The Israeli military publishing real-time operational updates from an active warzone. And an Israel-based open source intelligence account covering the conflict from the inside.
Each sits at the centre of its own community, and none of those communities connect to each other. This isn’t just a politically divided network. It’s one where combatants, intelligence aggregators, and domestic political voices all operate in entirely separate silos with no cross-talk. The divide here isn’t just rhetorical. It’s structural.
The Crisis Information Problem
The dataset mixes legitimate breaking news, unverified eyewitness reports, clear disinformation, and AI-generated fake content presented as real. People across all seven communities are routing their sense-making through Grok, an AI that retrieves live X posts in real time and uses them as context to generate its answers. It isn’t interpreting this crisis from a safe distance. It’s pulling directly from the same feed everyone else is working through.
Grok isn’t a neutral aggregator. When thousands of people consult the same AI simultaneously during a fast-moving conflict, fabrications don’t just spread. They get laundered through a source that feels authoritative. And the problem runs deeper than text.
The Synthetic Media Problem
Grok has no reliable way to determine in real time whether footage circulating during a live conflict is genuine or AI-generated. Posts claiming to show missile strikes, explosions, and ground-level combat are moving through all seven communities simultaneously, and that content can be generated by any number of AI image-generation tools.
X rolled out a “Made with AI” label on March 1, two days before this data was collected, which is a step in the right direction. But the system relies on self-reporting by creators. During a conflict in which bad actors deliberately circulate synthetic media as real footage, voluntary disclosure is not a meaningful safeguard.
What’s needed is platform-level content provenance built on standards like Google’s SynthID watermarking and C2PA, an open standard backed by Adobe, Microsoft, Google, OpenAI, and Midjourney. These tools embed verifiable signals directly into AI-generated content at the point of creation, and major AI generators are already adopting them. The problem is that C2PA metadata gets stripped when images are screenshotted and re-uploaded, which is exactly how conflict footage travels on X. The infrastructure exists. The gap is deployment at the platform level, in real time, at scale. I wrote about content provenance standards in more depth back in November 2024, if you want to go further on this.
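To make that failure mode concrete, here’s a toy sketch, emphatically not the real C2PA API: provenance stored in the file container survives a copy of the file, but not a rebuild of the pixels, which is all a screenshot is.

```python
# Toy illustration, not real C2PA: provenance stored as file metadata
# survives copying the file, but not re-rendering the pixels.
# "generated.png" is a hypothetical AI output.
from PIL import Image

original = Image.open("generated.png")
original.info["provenance"] = "c2pa-manifest-stand-in"  # pretend embedded manifest

# A "screenshot": a brand-new image built from the pixel values alone.
screenshot = Image.new(original.mode, original.size)
screenshot.putdata(list(original.getdata()))

print("provenance" in original.info)    # True: the source file keeps its manifest
print("provenance" in screenshot.info)  # False: the manifest is gone
```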
X’s Community Notes Feature Isn’t Filling the Gap
The crowdsourced safety net isn’t compensating either. Research published in January 2026 found a substantial drop in the number of Community Notes proposed on X since Grok’s rise on the platform, with the researchers suggesting that people are treating the AI as a direct substitute for human fact-checking. During a fast-moving conflict, that gap is significant. Misinformation spreads faster than community consensus can form. And because none of these communities are talking to each other, there is no cross-community correction. Each community gets its own version of events, with no friction between them.
The Bigger Picture
What this network shows is a structural shift in how people process breaking geopolitical events online. The AI hasn’t just entered the conversation. In this dataset, it owns the conversation.
That should prompt serious questions about platform design, about AI transparency during crises, and about what happens to public discourse when a single non-human node becomes the dominant information broker in a conflict with real-world consequences. If you work in communications, intelligence, journalism, or policy, this is worth paying attention to.
I’ll keep running pulls as this conflict develops. The network will change. Whether Grok’s role in it does is the more interesting question.
Methodology
| Parameter | Details |
|---|---|
| Data collection | NodeXL via X Search API |
| Visualization | Gephi |
| Dataset | 5,000 X posts, English-language only |
| Timeframe | March 3, 2026, approx. 22:00-23:59 UTC (99.6% of dataset) |
| Total interactions | 8,161 |
| Total accounts | 6,426 |
| Node/label size | Eigenvector centrality |
| Community detection | Modularity class |
| Image 1 | Top 7 communities (above 1% each), 40.6% of full network |
| Image 2 | Grok removed, giant component filter applied, 18.19% of full network |
Note: NodeXL collects via the X Search API in reverse chronological order and stops at the set limit. Despite a March 1-3 collection window, 99.6% of this data falls within a two-hour peak window on the evening of March 3. This is a snapshot of peak crisis conversation, not a representative sample across the full period.
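If you’re replicating the pull, a quick sanity check on that recency skew is worth running. A sketch assuming a hypothetical posts.csv export with a created_at column:

```python
# Sanity check on the recency skew: since the Search API returns posts
# newest-first and the pull stops at the set limit, the sample piles up
# at the end of the window. "posts.csv" / "created_at" are placeholders.
import pandas as pd

posts = pd.read_csv("posts.csv", parse_dates=["created_at"])

latest = posts["created_at"].max()
in_final_2h = (posts["created_at"] >= latest - pd.Timedelta(hours=2)).mean()

print(f"window: {posts['created_at'].min()} to {latest}")
print(f"share of posts in the final two hours: {in_final_2h:.1%}")
```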
Tracking AI’s role in crisis information environments? I’d like to hear what you’re seeing. Find me on X or LinkedIn and drop a note.
