#BrainUp Daily Tech News – (Tuesday September 9ᵗʰ)

Welcome to today’s curated collection of interesting links and insights for 2025/09/09. Our hand-picked, AI-optimized system has processed and summarized 22 articles from across the internet to bring you the latest technology news.

As previously aired🔴LIVE on Clubhouse, Chatter Social, and TikTok.

Also available as a #Podcast on Apple 📻, Spotify🛜, Anghami and Amazon🎧 or anywhere else you listen to podcasts.

1. Trump’s Policies Are Shutting Out Americans From the Coolest New Gadgets

Trump’s tariffs are reshaping IFA 2025 into a preview of gadgets and prices American shoppers may not see anytime soon, underscoring policy pressure on the tech market under @Trump. At the Berlin show, many companies refused to disclose prices or confirm U.S. launch plans, and DJI has been soft-banned from importing gear into the U.S. The result is ongoing pricing uncertainty: the TCL QM9K TV’s price was left undisclosed, and U.S. availability for other products remains unclear. Even brands that publicly praised the administration stay silent on U.S. availability, while firms like Roborock, Anker, Mova, and Dreame say their newer devices may not reach the U.S. soon. Overall, the article argues that tariffs threaten a future of affordable, plentiful gadgets and leave U.S. consumers watching as #tariffs and #IFA2025 dynamics shape what can actually be bought.


2. ASML becomes main shareholder in Mistral AI after 1.3B euro deal

ASML @ASML acquires an 11% stake in @MistralAI for €1.3B, valuing Mistral at €10B and making it the most valuable AI company in Europe. The investment comes as part of a new €1.7B financing round that makes ASML the largest shareholder and grants it a seat on Mistral’s board. Mistral, founded in 2023 by former DeepMind and Meta researchers, is known for open-source models and multilingual AI, with capabilities spanning #OCR and potential production-line deviation detection. Bank of America reportedly advised ASML on the deal, underscoring a European tech collaboration in which ASML’s lithography expertise could benefit from Mistral’s AI-driven data analysis and product improvement; ASML’s EUV systems used in chip manufacturing cost around €150M each. The move signals Europe’s effort to bolster its AI ecosystem against @OpenAI, @Google, @Anthropic and @Meta, while highlighting the synergies between #open-source #OCR #AI and Europe’s tech leaders.


3. Signal users can now back up and restore messages safely

Signal has introduced a new feature allowing users to back up and restore their messages securely, addressing a previously missing functionality. This update ensures users can create encrypted local backups without compromising their strong privacy and security standards. The feature encrypts backups with a user-defined password, preventing unauthorized access even if the backup file is intercepted or stolen. This development simplifies message migration for users switching devices while maintaining Signal’s commitment to privacy protection. It marks a significant enhancement in user experience by combining convenience with robust security.
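To illustrate the pattern described above (a backup key derived from a user-chosen password, with the archive sealed by an authenticated cipher), here is a minimal sketch using scrypt and AES-GCM from the Python cryptography package. It is a conceptual illustration only, not Signal’s actual backup format, cipher choice, or key-derivation parameters.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt


def _derive_key(password: str, salt: bytes) -> bytes:
    # Stretch the user-chosen password into a 256-bit key (scrypt parameters are illustrative).
    return Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(password.encode())


def encrypt_backup(plaintext: bytes, password: str) -> bytes:
    salt = os.urandom(16)    # random salt so identical passwords yield different keys
    nonce = os.urandom(12)   # AES-GCM nonce, must be unique per encryption
    ciphertext = AESGCM(_derive_key(password, salt)).encrypt(nonce, plaintext, None)
    return salt + nonce + ciphertext  # keep the parameters next to the sealed data


def decrypt_backup(blob: bytes, password: str) -> bytes:
    salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
    # Raises InvalidTag if the password is wrong or the backup was tampered with.
    return AESGCM(_derive_key(password, salt)).decrypt(nonce, ciphertext, None)


backup = encrypt_backup(b"exported chat history", "correct horse battery staple")
assert decrypt_backup(backup, "correct horse battery staple") == b"exported chat history"
```

The key point the sketch captures is that the backup remains unreadable without the password even if the file itself is intercepted, since the decryption key is never stored alongside it.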


4. Linus Torvalds has had enough of pointless Link: tags

@Linus Torvalds has had enough of pointless Link: tags, insisting that links must provide additional information and not waste time. He wrote on the Linux kernel mailing list that links should explain why a commit was made and ideally point to a thread rather than to an acked-by email. This stance sits alongside ongoing development, as the Linux 6.17-rc5 release candidate rolls out with improvements in memory management, #Rust toolchain support, and broader hardware coverage for #AMD and #Intel, including Bartlett Lake in the #PMC_driver. He notes that links should offer real context rather than simply repeat messages that were already posted. While acknowledging he’s grumpy, @Linus Torvalds emphasizes that his main job is to make sense of pull requests, and he detests automatically added links that make that task harder.


5. Nova Launcher’s founder and sole developer has left

Nova Launcher’s founder and sole developer, @Kevin Barry, has left Nova’s parent company after being told to stop work on open-sourcing the launcher. Barry had been pursuing #open-source plans for the launcher, and a statement from @Alex Austin, co-founder of Branch Metrics, had suggested that if Barry left, the code would be open-sourced and handed to the community. @Alex Austin left Branch in 2023, and with Barry now gone, newer leadership has reportedly deprioritized Nova, treating it as an app Branch owns rather than a project to advance toward open-sourcing. @Cliff Wade, Nova’s former customer relations lead who left in 2024, says the current leadership doesn’t care about Nova and will not invest time unless the community applies pressure. The future of #open-source #NovaLauncher thus remains unclear: the Nova Launcher site returns a 404, Barry has not commented, users are pushing via petitions, and the app remains available on Google Play.


6. ICE Spends Millions On Clearview AI Face Recognition To Find People Assaulting Officers

Immigration and Customs Enforcement (#ICE) has invested millions of dollars in Clearview AI’s facial recognition technology to identify individuals who assault officers. Clearview AI aggregates publicly available images from social media and other sources to build an extensive database, which ICE uses to enhance investigations and capture suspects more effectively. The expenditure reflects ICE’s reliance on advanced surveillance tools to support law enforcement missions amid growing concerns about privacy and ethical implications. This deployment demonstrates how technology companies like @Clearview AI are increasingly integrated into government operations, raising questions about transparency and oversight. The use of this facial recognition technology highlights a trend towards expanding digital tools in policing and immigration enforcement.


7. WhatsApp Whistleblower Lawsuit Exposes Company’s Secret Surveillance Practices

A former WhatsApp safety engineer has filed a lawsuit alleging the company engaged in secretive surveillance of user chats, raising privacy concerns about the popular messaging app owned by Meta. The engineer claims that WhatsApp developed tools to scan private messages for law enforcement despite promises of end-to-end encryption, suggesting internal conflicts over privacy policies. Evidence from leaked documents and internal communications reveals a covert program designed to detect and report certain content without users’ knowledge, implicating tensions between user privacy and regulatory compliance. This lawsuit highlights the challenges tech firms face in balancing secure communication platforms with government demands, calling into question the integrity of privacy guarantees made to users worldwide. The case may prompt greater scrutiny of #surveillance practices at major tech companies and the role of whistleblowers in safeguarding digital privacy.


8. Anthropic Judge Blasts $1.5 Billion AI Copyright Settlement (2)

The federal judge overseeing Anthropic PBC’s proposed $1.5 billion copyright settlement criticized the process and warned the deal is nowhere close to done #classaction. At the hearing, @William Alsup said he felt misled, delayed approval pending clearer information on the class-member claims process, and questioned behind-the-scenes deals that could be forced down the throats of authors. The order noted that important questions remain, including which works the deal covers and how potential class members will be notified, signaling that the settlement requires substantial redesign before preliminary approval #claims. Alsup urged a robust opt-in mechanism in which ownership disputes are resolved in state court, and he instructed that the claim form require copyright owners to opt in, while warning of the risk of later challenges if everyone is simply paid #ownership. The dispute could reshape AI copyright settlements and set a benchmark as negotiations involve @OpenAI, @Meta, and @Midjourney, with implications for authors, publishers, and future #classactions.


9. Judge skewers $1.5B Anthropic settlement with authors in pirated books case over AI training

A federal judge sharply criticized the proposed $1.5B settlement between @Anthropic and authors who allege that nearly half a million books were pirated to train #AI systems. The judge’s remarks indicate the deal may not end the dispute and could still go to trial, given concerns about how training data is sourced in the case. The ruling highlights ongoing tensions between compensating authors and rapidly evolving #copyright and #training practices, and it may force tougher scrutiny of settlements in tech copyright cases. How this unfolds could shape future settlements and the likelihood of a trial in similar #copyright disputes.


10. Apple’s India Sales Hit Record $9 Billion After Big Retail Push

Apple has achieved a record $9 billion in sales in India, driven by an aggressive expansion of its retail presence across the country. The company’s growth is supported by its increased focus on #localization and building a robust ecosystem in the rapidly growing Indian market. This includes establishing more Apple Stores and enhancing partnerships with local distributors, which boosted consumer access and brand loyalty. The success reflects Apple’s strategic adaptation to India’s unique market dynamics and capitalizes on the rising demand for premium smartphones in the region. This milestone underscores Apple’s commitment to growing its footprint in emerging markets and diversifying its global revenue streams.


11. Sam Altman says that bots are making social media feel ‘fake’ | TechCrunch

@Sam Altman says bots are making social media feel fake after reading Reddit posts praising @OpenAI Codex, underscoring a growing doubt about whether posts are written by humans. He posted on X describing how real people seem to adopt quirks of #LLM-speak, how the Extremely Online crowd drifts in correlated ways, and how engagement incentives and creator monetization can juice the prevalence of inauthentic content, including the possibility of bots. He also notes signs that OpenAI’s own communities have been astroturfed and points to a shift in sentiment on OpenAI subreddits after the GPT-5 release, where angry posts gained visibility. The reflection ties back to the idea that LLMs were built to mimic human communication, which, combined with platform incentives, can erode trust in social media and make genuine human voices harder to discern, a challenge for online discourse. #OpenAI #Codex #astroturfing


12. Attorney says detained Korean Hyundai workers had special skills for short-term jobs

Detained South Korean workers at a Hyundai plant in Georgia were brought in for highly specialized tasks that Americans aren’t trained to perform, according to an immigration attorney. The attorney says these workers possessed skills tied to short-term, niche work that would not be readily filled by the domestic workforce. If this account is accurate, it suggests @Hyundai relied on foreign #skilledwork for specific tasks, highlighting how labor needs can hinge on foreign expertise rather than broad domestic training, a point relevant to #immigration and workforce policy. The story frames the raid in the context of broader debates over labor shortages, guest-work programs, and the economics of importing specialized skills.


13. AI’s existential threat: Researcher predicts 99% unemployment by 2030 without safety measures

AI safety researcher @Roman Yampolskiy warns that without proper safety measures, artificial intelligence could pose an existential threat, putting the probability of it ending humanity at 99.999999%. In a discussion with @Steven Bartlett, he argues that unemployment could reach 99% within five years, far higher than the roughly 10% usually discussed. He says it may become uneconomical to hire humans as AI tools automate both digital tasks and, within a few years, physical labor through #humanoid_robots. He adds that so-called future-proof roles like coding and #prompt_engineering will be displaced, noting that AI is better at designing prompts than humans. The article suggests retraining may not help because, if all jobs are automated, there is no plan B, and it frames this warning amid a race among major labs like @OpenAI, @Google, and @Anthropic that prioritize profitability over safety, a view echoed by @Bill Gates.


14. Anthropic and DeepMind staff start hunger strike to urge AI pause over 2025 threat

Employees at AI labs Anthropic and DeepMind have initiated a hunger strike to demand a pause on developing powerful AI systems until safety measures are improved. The strike highlights growing concern among AI researchers about the rapid pace of progress and potential risks of advanced #ArtificialIntelligence by 2025, such as loss of control or misuse. This protest reflects internal dissent urging organizations and policymakers to prioritize robust #AI regulation and safety protocols. The hunger strike stresses the need for a collective pause to evaluate the implications and develop governance frameworks to mitigate existential risks. The actions by @Anthropic and @DeepMind staff underscore the urgency to address ethical and security challenges in AI before accelerating development.


15. In court filing, Google concedes the open web is in rapid decline

Google acknowledged in a court filing that the open web is rapidly declining, noting that user engagement is increasingly shifting towards private apps and platforms controlled by major tech companies. This transition disrupts traditional web standards and open access to information, as more content consumption happens within closed ecosystems rather than through open websites. The decline poses challenges to the principles of openness and interoperability that have historically shaped the internet. By highlighting this trend, Google underscores the need to adapt strategies and potentially reevaluate regulatory frameworks to address the growing dominance of walled gardens. This shift impacts users’ autonomy over data and choices online, reflecting larger industry transformations in digital consumption and content distribution.


16. Vodafone’s new ad proves even influencers can be replaced by AI

Vodafone Germany is testing AI-generated advertising by using a non-existent TikTok spokesperson to promote a cashback offer, showing that AI influencers are entering mainstream marketing. The clip features a young brunette whose telltale signs of being synthetic, such as disappearing moles, make clear she isn’t real; Vodafone says it is experimenting with the technology after previously releasing a 100% AI-produced ad, ‘The Rhythm of Life’. The broader AI-influencer ecosystem includes @LilMiquela, who has 2.4M followers and commands large fees. This shift signals that brands are moving spend toward synthetic personas and #AI in #advertising, while industry voices warn of displacement: Stanford research shows early-career workers facing declines, @DarioAmodei predicts substantial job loss, @GeoffreyHinton warns of unemployment spikes, and @BillGates cautions that AI experience may not shield workers. The Vodafone experiment fits a wider trend that the Fortune Global Forum will examine as leaders weigh technology, jobs, and the future of media.


17. UK Age Verification Data Confirms Mass Migration To Sketchier Sites

The UK government’s age verification system for adult websites has caused a significant migration of users to less secure and potentially riskier sites. Data reveals that after the enforcement of #ageverification policies, many visitors abandoned mainstream platforms that complied, opting instead for sketchier websites lacking robust protections. This outcome supports critics’ warnings that strict government measures unintentionally drive users towards unregulated environments, increasing exposure to harmful content and cyber risks. The situation underscores the unintended consequences of digital regulation without comprehensive user safety frameworks. The UK example illustrates the challenges in balancing online protection laws and realistic user behavior patterns.


18. Meta suppressed children’s safety research, 4 whistleblowers claim | TechCrunch

Meta is accused by four current and former employees of suppressing research on children’s safety and of changing its research policies after @Frances Haugen’s 2021 disclosures. The documents turned over to Congress, reported by The Washington Post, allege two ways researchers could limit risk: looping lawyers into research to shield communications under attorney-client privilege, and describing findings more vaguely, avoiding terms such as “not compliant” or “illegal.” In a specific case, former Meta VR researcher Jason Sattizahn says a manager forced him to delete interview recordings in which a teen described sexual propositions involving Horizon Worlds. Meta defends itself, saying privacy regulations require the deletion of data collected from users under 13, and that it has approved nearly 180 Reality Labs studies on youth safety and well-being since 2022. The whistleblower claims contribute to ongoing congressional scrutiny and align with a February lawsuit by Kelly Stonelake, a former Meta employee, highlighting tensions between youth safety, research freedom, and corporate risk management. #RealityLabs #HorizonWorlds #youthsafety #attorneyclientprivilege


19. Human stem cells age more rapidly in space, study finds

Human hematopoietic stem and progenitor cells (HSPCs) exposed to about a month in space showed signs of accelerated aging, highlighting how the space environment can affect blood and immune cell formation. In 32-45 day experiments aboard the ISS using nanobioreactors, the ISS-exposed cells exhibited reduced self-renewal, greater DNA-damage susceptibility, and mitochondrial inflammation. Importantly, the damage appeared at least partly reversible once the cells returned to Earth. @Catriona Jamieson notes that space stressors such as #microgravity and #cosmicradiation can accelerate molecular aging of blood stem cells, with implications for protecting astronauts on long missions and for modeling human aging and cancer here on Earth. This study underscores how #HSPCs respond to extreme environments and informs both space biology and terrestrial aging research.


20. Undersea cables in the Red Sea are cut causing internet disruptions in the Middle East

Multiple undersea internet cables in the Red Sea were cut, causing significant internet disruptions across the Middle East, including in countries such as Egypt, Saudi Arabia, Sudan, and Yemen. These cuts, which occurred at several locations, severely affected internet capacity and normal access. The cause is under investigation, with the timing raising concerns over possible intentional sabotage given the strategic location and regional conflicts. This disruption highlights the vulnerability of critical #infrastructure like undersea communication cables to both accidental failures and deliberate attacks. Restoring full service could take days, impacting millions and emphasizing the crucial need for secure, resilient internet connectivity in geopolitically sensitive regions.


21. Wired Is Google’s Preferred Source

The article reveals that Google often prioritizes content from Wired in its search results, due to Wired’s strong reputation and adherence to journalistic standards. This preference is illustrated by Google’s algorithms favoring Wired when multiple quality sources cover the same news, helping users access credible and well-researched information. Wired’s commitment to transparency and thorough reporting supports Google’s aim to deliver reliable content to users, strengthening both brands’ authority in technology and culture journalism. This symbiotic relationship highlights how digital platforms and media outlets collaborate to combat misinformation and enhance information quality online. Ultimately, Google’s elevation of Wired content underscores the importance of trusted sources in the evolving landscape of search engine information delivery.


22. Analog optical computer for AI inference and combinatorial optimization

In this Nature article, researchers introduce an innovative analog optical computer (AOC) that fuses analog electronics and three-dimensional optics to accelerate both AI inference and combinatorial optimization on a unified platform. By leveraging a fixed-point search abstraction, the AOC operates without energy-intensive digital conversions, using optics for matrix-vector multiplications and analog electronics for nonlinear operations, subtraction, annealing, and feedback, achieving rapid iterative convergence (~20 ns per loop) and enhanced noise robustness. The system supports emerging iterative neural models like deep-equilibrium networks for image classification and regression, as well as quadratic unconstrained mixed optimization (QUMO) problems such as MRI reconstruction and financial transaction settlement. A digital twin (AOC-DT) reproduces the physical hardware with over 99% fidelity. Built from consumer-grade components, the AOC demonstrates exceptional energy efficiency, projecting ~500 TOPS per watt at 8-bit precision, more than 100× better than modern GPUs, pointing toward a scalable, sustainable path for future computing in AI and optimization.
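To make the fixed-point abstraction concrete, here is a minimal numerical sketch of the kind of iteration the AOC runs in hardware, where the optics would handle the matrix-vector product and the analog electronics the nonlinearity and feedback. The weight matrix, the tanh nonlinearity, and the convergence threshold below are illustrative assumptions, not the paper’s actual model.

```python
import numpy as np

# The fixed-point abstraction: repeatedly apply x -> f(W @ x + b) until x stops changing.
# On the AOC, optics perform W @ x and analog electronics apply the nonlinearity and feed
# the result back; here the whole loop runs digitally so the iteration is easy to inspect.

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.3, size=(n, n))   # stand-in for the optically encoded weights
b = rng.normal(size=n)                   # stand-in for the injected input

def step(x: np.ndarray) -> np.ndarray:
    # One loop iteration: matrix-vector multiply, then a pointwise nonlinearity.
    return np.tanh(W @ x + b)

x = np.zeros(n)
for i in range(1, 501):
    x_next = step(x)
    if np.linalg.norm(x_next - x) < 1e-10:   # reached a fixed point x* = f(x*)
        x = x_next
        break
    x = x_next

print(f"converged in {i} iterations, residual {np.linalg.norm(step(x) - x):.2e}")
```

Deep-equilibrium networks use exactly this kind of converged fixed point as their output, which is why a physical loop that converges in tens of nanoseconds per iteration is attractive for inference.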


That’s all for today’s digest for 2025/09/09! We picked and processed 22 articles. Stay tuned for tomorrow’s collection of insights and discoveries.

Thanks to Patricia Zougheib and Dr. Badawi for curating the links

See you in the next one! 🚀

Sam Salhi
https://www.linkedin.com/in/samsalhi

Sr. Program Manager @ Nokia | Engineer, Futurist, CX Advocate, and Technologist | MSc, MBA, PMP | Science & Technology Communicator, Consultant, Innovator, and Entrepreneur