
South Sudan’s government recently announced that it has acquired a device to track online hate speech and incitement, and that it can identify users spreading harmful content.
“The Minister of Information, Communication Technology and Postal Services, Ateny Wek Ateny, has announced that the government has acquired a device capable of identifying individuals who use social media platforms to spread hate speech and incite violence,” says a report by Eye Radio.
The claim suggests a single machine can pinpoint who posts offensive or dangerous material on social media. In reality, experts say no such standalone device exists. Tracing online speech to a real person is a multi-step process involving content detection tools, metadata from telecoms and platforms, legal orders, and human investigation.
Understanding the limits of technology matters, especially as South Sudan approaches elections, when political messaging and public debate intensify and online content can influence perceptions and behavior.
Debunking the claim
A Google keyword search turns up no evidence that a single, universally effective “device” (in the sense of a standalone piece of hardware or a single, unified software program) exists that can identify all individuals spreading hate speech and inciting violence across social media platforms worldwide.
Automated tools can detect content, but cannot identify authors
Technology can help identify harmful content online. Software tools known as social media monitors scan public posts for keywords, images, or patterns associated with hate speech, violence, or misinformation. These systems can alert authorities or researchers when certain narratives spike or specific phrases trend.
But these tools answer the question “What content is being shared?” They do not answer “Who is the actual person behind it?”
Academic research on hate speech detection highlights this limitation. Automated systems can classify content as potentially harmful based on patterns, but they do not determine the identity of the user who created it. Algorithms classify data but do not uncover personal identities by themselves.
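To make that gap concrete, here is a minimal Python sketch of keyword-based flagging, the simplest form of what content monitors do. The phrases, handles and posts are invented for illustration; real systems use trained machine-learning classifiers rather than keyword lists, but the limitation is identical: the output is a label on content, not a verified identity.

```python
# Minimal sketch of keyword-based content flagging (illustrative only;
# phrases and posts are invented). Real monitors use trained classifiers,
# but the output is the same kind of thing: a label attached to content.

FLAGGED_PHRASES = {"attack them", "drive them out"}  # invented examples

def flag_post(text: str) -> bool:
    """Return True if the post contains a flagged phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)

posts = [
    {"handle": "@user123", "text": "Great match last night!"},
    {"handle": "@user456", "text": "We must drive them out of town."},
]

for post in posts:
    if flag_post(post["text"]):
        # All the system knows is the account handle, which can be
        # pseudonymous, shared, or fake. Nothing here reveals the
        # real person behind the post.
        print(f"Flagged content from {post['handle']}")
```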
Independent evidence shows harmful online content but not device tracing
Independent monitoring shows that harmful online content is a documented concern in South Sudan. A February 2025 report by the Collaboration on International ICT Policy for East and Southern Africa (CIPESA) found hundreds of incidents of harmful and misleading narratives across platforms such as Facebook, WhatsApp, X (formerly Twitter) and TikTok, often tied to political tensions. The report described how misinformation and hate speech spread across platforms, illustrating the scale of the problem. (CIPESA, 2025)
This evidence demonstrates that harmful content exists and is being monitored by independent organisations. It does not, however, show that a single device is capable of attributing such content to specific individuals.
Tracing a post to a person requires cooperation with platforms and telecoms
Attributing a post to a real person typically depends on several categories of data:
• IP addresses showing where a device connected to the internet
• Phone numbers linked to accounts
• Device identifiers
• Login records and timestamps
Social media platforms and telecom operators keep this data, but they do not release it automatically. In most jurisdictions, companies disclose such information only after formal, lawful requests such as court orders. Metadata — like the times and places an account was accessed — can help investigators narrow down possibilities, but it rarely identifies a person with absolute certainty.
Dynamic IP addresses, shared networks and devices used by multiple people make it even harder to assign a post to a specific individual, especially in contexts where phone sharing is common.
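As a simplified illustration of that ambiguity, the following Python sketch (with entirely invented session records) shows what a subscriber lookup against telecom logs might return: on a shared network, one IP address at one moment can map to several candidate subscribers, not one person.

```python
# Hypothetical sketch of looking up who was behind an IP address at a
# given time. All data is invented. On shared or carrier-grade NAT
# networks, one public IP maps to many subscribers at once, so the
# lookup narrows the pool of candidates rather than naming one person.

from datetime import datetime

# Invented telecom session records: (subscriber_id, ip, start, end)
SESSION_LOG = [
    ("SUB-001", "41.210.5.9", datetime(2025, 2, 1, 9, 0), datetime(2025, 2, 1, 12, 0)),
    ("SUB-002", "41.210.5.9", datetime(2025, 2, 1, 10, 0), datetime(2025, 2, 1, 11, 30)),
    ("SUB-003", "41.210.5.9", datetime(2025, 2, 1, 10, 15), datetime(2025, 2, 1, 13, 0)),
]

def candidates(ip: str, at: datetime) -> list[str]:
    """Return every subscriber whose session covers the IP at that moment."""
    return [sub for sub, sess_ip, start, end in SESSION_LOG
            if sess_ip == ip and start <= at <= end]

# A post made from 41.210.5.9 at 10:30 still has three possible authors.
print(candidates("41.210.5.9", datetime(2025, 2, 1, 10, 30)))
# -> ['SUB-001', 'SUB-002', 'SUB-003']
```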
Encryption and privacy settings limit direct content access
Many platforms where harmful content spreads use end-to-end encryption. Services like WhatsApp encrypt message contents so that only the sender and recipient can see them. Even the platforms themselves cannot read the actual messages.
At best, metadata can show that a message was sent between two devices at a particular time. The content itself stays encrypted. This means that governments, even with advanced tools, cannot easily decode or “track” hate speech in private communications.
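The following sketch uses the PyNaCl library to illustrate the general principle of public-key end-to-end encryption. It is not WhatsApp’s actual protocol (WhatsApp builds on the more elaborate Signal protocol), but it demonstrates the same property: a relay in the middle sees only ciphertext and metadata.

```python
# Minimal sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# Not WhatsApp's actual protocol, but it illustrates the same property:
# anyone in the middle, including the platform, sees only ciphertext
# plus metadata (who messaged whom, and when).

from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave their device.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts with their private key and the recipient's public key.
sending_box = Box(sender_key, recipient_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# A server relaying this message sees only unreadable bytes.
print(ciphertext.hex()[:32], "...")

# Only the recipient's private key can decrypt the message.
receiving_box = Box(recipient_key, sender_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'meet at noon'
```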
Real investigation requires digital forensics and human analysis
Linking harmful content to individuals in a legally defensible way typically involves:
• Device seizure and forensic analysis
• Cross-referencing metadata from providers
• Content verification
• Timeline reconstruction
• Witness or account evidence
These steps require trained specialists, legal backing, secure evidence handling and human review. None of this happens by turning on a “device.” What tools can do is assist investigators, not replace them.
Legal claims must be viewed carefully
Government officials have cited legal frameworks as the basis for action against online harms, including references to the Cyber Crime and Computer Misuse Act. This law was passed by South Sudan’s legislature in November 2025 and expands the scope of online offences, including the publication of false information and other cybercrime categories.
Other laws, such as the Criminal Procedure Act (2008) and the National Security Act (2013, amended 2015), are also cited by authorities. These laws may provide a legal basis for investigation and prosecution.
Digital rights context: monitoring ≠ attribution
Reports such as the State of Internet Freedom in Africa 2023 show that governments across Africa implement varied levels of internet monitoring and surveillance. These measures often raise concerns about freedom of expression and privacy, especially in politically sensitive periods.
Such reports underline a key point: technology — even when used for content monitoring or policy enforcement — is not a standalone solution for identifying individual users. Attribution requires legal process, rights safeguards and transparent procedures.
Why the idea of a “device” is misleading
If authorities describe a device as capable of automatically identifying individuals responsible for online hate speech, that description overstates what technology can do on its own.
A “device” in this context may be a server, software or monitoring dashboard used to detect harmful content patterns. These tools are useful for tracking trends and highlighting potentially problematic material. But they do not, by themselves, reveal the identities of those who post it.
The actual work of tracing content to individuals involves a chain of evidence and cooperation with platforms and providers combined with legal authority and human review.
Why this matters during elections
Online platforms are central to public discourse, especially in election seasons. Misinformation and harmful content can affect public perception, deepen divisions and reduce trust in institutions. Addressing these risks is important.
At the same time, overstating technological capabilities can lead to false confidence, misdirected enforcement and potential infringement of freedom of expression. When people believe a device can automatically and accurately identify them, it may discourage legitimate speech, criticism and reporting.
International digital rights organisations emphasise the importance of clear definitions, transparent processes and independent oversight when governments regulate online speech and investigate digital harms.
Conclusion
No single device can track hate speech online and attribute it to specific users. What exists are technologies that detect harmful content patterns and support human investigators. Attribution requires data, legal process, platform cooperation and careful review.
As South Sudan continues its election journey, the public and policymakers must focus on realistic methods to address online harmful content while protecting rights, transparency and trust in democratic processes.
This article is published by The ClarityDesk, with the support of the Election Civic Tech Fund of AfricTivistes, within the AHEAD Africa and Digitalise Youth projects, led by the Digital Democracy Initiative.
Have you spotted an error in this article and would like to request a correction, or have you come across a claim that we should investigate? Please send us an email via editor@claritydesk.org or message us on WhatsApp via +211 928 606 958.