With the Israel-Hamas war intensifying by the day, many people are desperate for accurate information about the conflict. Getting it has proven difficult. This has been most apparent on Elon Musk’s X, formerly Twitter, where insiders say even the company’s primary fact-checking tool, Community Notes, has been a source of disinformation and is at risk of coordinated manipulation.
Case in point: An explosion at a hospital in Gaza on Tuesday was followed by a wave of mis- and disinformation around the cause. In the hours following the explosion, Hamas blamed Israel, Israel blamed militants in Gaza, mainstream media outlets repeated both sides’ claims without confirmation either way, and people posing as open source intelligence experts rushed out dubious analyses. The result was a toxic mix of information that made it harder than ever to know what’s real.
On Thursday, the United States Department of the Treasury proposed plans to treat foreign-based cryptocurrency “mixers”—services that obscure who owns which specific coins—as suspected money laundering operations, citing as justification crypto donations to Hamas and the Palestinian Islamic Jihad, a Gaza-based militant group with ties to Hamas that Israel blamed for the hospital explosion. While these types of entities do use mixers, experts say they do so far less than criminal groups linked to North Korea and Russia—likely the real targets of the Treasury’s proposed crackdown.
In Myanmar, where a military junta has been in power for two years, people who speak out against deadly air strikes on social media are being systematically doxed on pro-junta Telegram channels. Some were later tracked down and arrested.
Finally, the online ecosystem of AI-generated deepfake pornography is quickly spiraling out of control. The number of websites specializing in and hosting these faked, nonconsensual images and videos has greatly increased in recent years. With the rise of generative AI tools, creating these images is quick and dangerously easy. And finding them is trivial, researchers say. All you have to do is a quick Google or Bing search, and this invasive content is a click away.
That’s not all. Each week, we round up the security and privacy stories we didn’t cover in-depth ourselves. Click the headlines to read the full stories, and stay safe out there.
The recent theft of user data from genetics testing giant 23andMe may be more expansive than previously thought. On October 6, the company confirmed that a trove of user data had been stolen from its website, including names, years of birth, and general descriptions of genetic data. That initial leak appeared to primarily target users of Ashkenazi Jewish descent and also included data on hundreds of thousands of users of Chinese descent. This week, a hacker claiming to have stolen the data posted millions more records for sale on the platform BreachForums, TechCrunch reports. This time, the hacker claimed, the records pertained to people from the United Kingdom, including “the wealthiest people living in the US and Western Europe on this list.” A 23andMe spokesperson tells The Verge that the company is “currently reviewing the data to determine if it is legitimate.”
According to 23andMe, its systems were not breached. Instead, the company said, the theft was likely the result of credential stuffing: people reused passwords on their 23andMe accounts that had already been exposed in breaches of other services, and attackers used those leaked credentials to log in. If you need some motivation to stop recycling passwords, this is it.
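For the curious, here is a minimal illustration of why recycled passwords are so risky (this is a sketch for readers, not anything 23andMe describes using): the publicly documented Have I Been Pwned “Pwned Passwords” range API lets you check whether a password appears in known breach corpora. It uses k-anonymity, so only the first five characters of the password’s SHA-1 hash ever leave your machine.

```python
import hashlib
import urllib.request

def password_seen_in_breaches(password: str) -> int:
    """Return how many times a password appears in known breach data,
    via the Pwned Passwords k-anonymity range API. Only the first five
    hex characters of the SHA-1 hash are sent over the network."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<HASH_SUFFIX>:<COUNT>"; match our suffix locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A reused, previously breached password will return a large count.
    print(password_seen_in_breaches("password123"))
```

If a password returns a nonzero count, it is already circulating in the credential lists attackers feed into stuffing attacks like the one 23andMe describes.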
The US Department of Justice on Wednesday said it had uncovered a vast network of IT workers who were collecting paychecks from US-based companies and funneling that money to North Korea. The freelance IT workers are accused of sending millions of dollars to Pyongyang, which used the funds to help build its ballistic missile program. While the workers allegedly pretended to live and work in the US, the DOJ says they often lived in China and Russia and took steps to obscure their real identities. According to an FBI official involved in the case, it’s “more than likely” that any freelance IT worker a US company has hired was part of the plot.
Searching online may have just gotten a little bit more dangerous. On Monday, the Colorado Supreme Court upheld police use of a so-called keyword search warrant. Using this type of warrant, law enforcement demands that companies like Google hand over the identities of anyone who searched for specific information. This is the opposite of how traditional search warrants work, where cops identify a suspect and then use a warrant to obtain information about them.
Keyword search warrants have long been criticized as “fishing expeditions” that violate the US Constitution’s Fourth Amendment protections against unreasonable searches and seizures, because they potentially hand police information about innocent people who searched for a specific term but were not involved in any related crime.