#BrainUp Daily Tech News – (Saturday, October 25ᵗʰ)

Welcome to today’s curated collection of interesting links and insights for 2025/10/25. Our AI-assisted system has processed and summarized 26 hand-picked articles from across the internet to bring you the latest technology news.

As previously aired🔴LIVE on Clubhouse, Chatter Social, and TikTok.

Also available as a #Podcast on Apple 📻, Spotify 🛜, Anghami, and Amazon 🎧, or anywhere else you listen to podcasts.

1. Keira Knightley Banned Social Media at Home to Protect Her 2 Kids

Actress @KeiraKnightley has banned social media at home to shield her two children from the negative impacts of online platforms. She expressed concerns about how social media can be invasive and potentially harmful, especially regarding children’s exposure. Knightley’s decision reflects a growing awareness among parents about the influence of digital environments on young minds. By controlling social media usage in her household, she aims to create a safer, more private space for her family. This approach highlights the challenges parents face in balancing technology and child development in today’s digital age.


2. Political bubbles created by TikTok’s algorithm primarily benefit the PVV

EenVandaag finds that TikTok’s algorithm creates political bubbles that primarily benefit the far-right #PVV and @GeertWilders. Researchers created five TikTok accounts spanning the political spectrum and ran an automated program that scrolled through the #ForYouPages, boosting videos that matched each account’s preferences; the feeds personalized quickly, especially on the conservative-right side. On the conservative-right account, #PVV and @GeertWilders attract the most attention, while the progressive-left account features #D66 most and gives #GroenLinks-PvdA less exposure. On the right-of-center account, #DENK videos about Israel and Gaza dominate the bubble and JA21’s Joost Eerdmans appears in prominent ‘fan’ montages, while the left-leaning and undecided accounts see far less political content and a largely negative tone. Experts note that TikTok is secretive about its algorithm and call for greater openness and diversity in what is offered, with Bert Bakker stressing the need to understand how algorithms shape content.
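To illustrate the feedback loop the study describes (a hypothetical toy model, not EenVandaag’s actual program), here is a minimal Python sketch of a feed that keeps boosting whatever category an account engages with; after a few hundred rounds, the content served skews heavily toward that category, which is the bubble effect in miniature. The category names and boost factor are illustrative assumptions.

```python
# Toy model of engagement-driven personalization (illustrative only).
import random
from collections import Counter

CATEGORIES = ["far-right", "conservative", "centrist", "progressive", "non-political"]

def simulate_feed(preferred: str, rounds: int = 500, boost: float = 0.5) -> Counter:
    """Serve videos by weight; each time the preferred category is served,
    its weight grows, mimicking an algorithm that rewards engagement."""
    weights = {c: 1.0 for c in CATEGORIES}
    served = Counter()
    for _ in range(rounds):
        pick = random.choices(CATEGORIES, weights=[weights[c] for c in CATEGORIES])[0]
        served[pick] += 1
        if pick == preferred:       # the automated account "watches to the end"
            weights[pick] += boost  # the feed responds by boosting that category
    return served

if __name__ == "__main__":
    for account in ["far-right", "progressive"]:
        print(account, simulate_feed(preferred=account).most_common(3))
```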


3. a16z-backed startup sells thousands of synthetic influencers to manipulate social media as a service

A startup backed by @a16z offers thousands of synthetic influencers as a social-media-manipulation service, allowing clients to shape public opinion and marketing landscapes. These virtual personas can amass followers and simulate authentic engagement, driving targeted campaigns without human limitations. The practice raises ethical concerns about deception and the erosion of genuine social media interaction. The startup’s approach exemplifies #SyntheticMedia’s growing role in digital marketing and the manipulation of public discourse. This trend signals a shift in how influence and authenticity are conceptualized and challenges platforms to adapt.


4. Research Finds AI-Powered Bots Increase Social Media Post Engagement but Do Not Boost Overall User Activity

AI-powered #socialbots can lift engagement on individual posts but do not boost overall #useractivity across the platform. In a study on Weibo using the #CommentRobot bot, posts that received bot comments saw about 23% more comments and 11% more likes. The bots tended to target less active users, but active posters did not increase their own posting activity, suggesting limited effect on overall participation. Authored by @Yang Gao, @Maggie Mengqing Zhang, and @Mikhail Lysyakov, the study highlights that bot comment quality and social cues drive engagement outcomes. Ultimately, while AI-powered bots can increase visibility around a post, platforms should refine deployment strategies and not assume bots will boost overall user activity.


5. Instagram adds a watch history for Reels

Instagram adds a watch history for Reels to help users rediscover videos they’ve already watched. @Adam Mosseri said, ‘Hopefully, now you can find that thing that you were trying to find that you couldn’t find before,’ and the feature is accessible via Settings > Your activity > Watch history. It lets you sort by newest to oldest or oldest to newest, jump to a specific date or date range, and filter by the account that posted the video. By reducing friction from accidental taps or app refreshes that cause content to disappear, this enhancement improves content resurfacing and gives users better control over their viewing history. In short, it strengthens Instagram’s ability to reconnect users with past #Reels and enhances personal content discovery through #watchHistory on #Instagram.


6. Miami Has a New Cop on the Beat: a Drone-Launching Police Car That Can Drive Itself

Miami has introduced a new advanced police vehicle equipped with autonomous driving capabilities and drone-launching technology to enhance law enforcement efficiency. The self-driving police car can navigate streets independently, allowing officers to focus on other duties. Its built-in drone system can be deployed quickly to survey crime scenes or monitor public areas, providing real-time aerial intelligence. This integration of #autonomousVehicles and #drones represents a significant technological upgrade for urban policing, improving response times and situational awareness. The deployment in Miami reflects wider trends toward incorporating innovative technology to support public safety and law enforcement operations.


7. Meta and TikTok breach EU online platform rules, European Commission finds

The European Commission has determined that @Meta and TikTok violated EU online platform regulations designed to create safer digital environments. The companies failed to comply with rules requiring transparency in how content is moderated and how algorithms recommend videos and posts. This finding emphasizes the EU’s commitment to enforce its Digital Services Act by holding major tech firms accountable for their impacts on user safety and information integrity. The violations could lead to significant fines and increased scrutiny of platform practices across Europe. These actions demonstrate the EU’s broader strategy to regulate #BigTech and protect users from harmful online content.


8. Microsoft Disables Downloaded File Previews to Block NTLM Hash Leaks

Microsoft has disabled the preview functionality for downloaded files in Windows to mitigate #NTLM hash leaks exploited in credential-theft attacks. The change prevents malicious actors from triggering automatic authentication requests when files are previewed in File Explorer, reducing the risk of credential exposure. This update addresses a vulnerability where previewing certain files caused the system to send NTLM hashes to remote servers without user consent. By disabling previews on downloaded files, Microsoft cuts off an attack vector used to harvest credentials. The update aligns with ongoing efforts to strengthen the security posture of Windows environments against sophisticated threat actors.
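To make the attack vector concrete, here is a small, hypothetical Python sketch (not Microsoft’s actual mitigation) that scans a downloads folder for files embedding UNC paths such as \\server\share, the kind of remote reference that can cause Windows to attempt an outbound NTLM authentication when a file is previewed or rendered. The folder location, file extensions, and pattern are illustrative assumptions.

```python
# Hypothetical defensive check (illustration only, not Microsoft's fix):
# flag downloaded files that embed UNC paths, which can cause Windows to
# authenticate against a remote server when the file is rendered.
import re
from pathlib import Path

# Matches \\hostname\share -- the leading double backslash of a UNC path.
UNC_PATTERN = re.compile(rb"\\\\[A-Za-z0-9._-]+\\[A-Za-z0-9._$-]+")
SUSPECT_EXTENSIONS = {".lnk", ".url", ".library-ms", ".scf"}  # assumed file types

def find_unc_references(downloads: Path):
    """Yield (file, matches) for downloaded files containing UNC-style paths."""
    for path in downloads.iterdir():
        if not path.is_file() or path.suffix.lower() not in SUSPECT_EXTENSIONS:
            continue
        hits = UNC_PATTERN.findall(path.read_bytes())
        if hits:
            yield path, [h.decode(errors="replace") for h in hits]

if __name__ == "__main__":
    for file, refs in find_unc_references(Path.home() / "Downloads"):
        print(f"{file.name}: {refs[:3]}")
```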


9. Troops are being deployed in US cities, and tech billionaires are shaping that too

The increasing deployment of troops in US cities is influenced not just by government decisions but also by the actions of tech billionaires. These influential figures wield power through funding, technology development, and advocacy that shape urban security measures and state responses. Their involvement blurs the lines between private interests and public governance, raising questions about accountability and the potential militarization of civilian spaces. This dynamic highlights the growing role of technology elites in shaping policy and societal conditions related to law enforcement and public order. Understanding this intersection reveals the complexity behind contemporary domestic security and urban governance.


10. AI Dataset for Detecting Nudity Contained Child Sexual Abuse Images

The article reports that an AI training dataset used to detect nudity was found to contain child sexual abuse images, showing how poorly vetted data can pull illegal and deeply harmful material into machine learning pipelines. The finding raises questions about how such imagery entered the dataset and what screening, consent, and labeling practices its creators applied. It underscores the ethical and legal obligations involved in assembling large collections of sensitive imagery for #AI and #machinelearning systems intended for digital safety enforcement. The case strengthens calls for stricter curation, auditing, and cooperation with child-protection organizations so that tools built to remove harmful content do not themselves perpetuate harm, in line with global pressure for tighter online content regulation.


12. Feds probe Tesla about its ‘Mad Max’ mode

Federal authorities have launched an investigation into Tesla’s ‘Mad Max’ mode, a feature that reportedly allows drivers to set the car to an excessively aggressive and potentially unsafe driving style. Reports surfaced that this mode could encourage reckless behavior and may violate safety regulations, prompting the inquiry. The investigation seeks to understand if Tesla properly disclosed risks and complied with federal safety standards, reflecting growing scrutiny of advanced vehicle functionalities. This situation highlights ongoing tensions between automakers like @Tesla and regulatory agencies over the balance of innovation and public safety. The probe underscores the importance of ensuring that #automotive technology enhancements do not compromise driver and pedestrian safety.


13. Bill Gates predicts world on brink of 2-day work week with AI taking over most jobs by 2034

Bill Gates forecasts a significant shift in global work patterns by 2034, anticipating a two-day work week as #AI technology takes over the majority of jobs. He supports this claim by highlighting AI’s rapid advancement and its potential to automate various industries, reducing the need for human labor. Gates explains that technological progress tends to redistribute work rather than eliminate it, suggesting society will adapt to reduced human working hours. He underscores the importance of preparing for such changes through policy adaptations and workforce retraining programs. This outlook stresses the profound impact of #AI on economic structures and emphasizes proactive measures to ensure social stability amid this transformation.


14. Local NBC Station Fooled by AI Video of Race Track Power Outage Causing Huge Crash

NBC Chicago aired a short segment featuring a fake AI-generated clip of a dirt-track power outage that supposedly triggered a massive crash. The video originated on the satirical Weaber Valley Speedway page and was created by Howard Weaver using OpenAI’s Sora 2 video generator, with a detailed, multi-part prompt to simulate a shaky cell-phone recording and plausible captions. Weaver described typing long prompts that specify file names and camera angles to make a 10-second clip look authentic. The Weaber Valley page is known for satire and has a large following, yet many viewers and a legitimate news outlet were fooled, illustrating how convincing AI-driven footage can mislead when context is missing. The incident underscores the broader risk of synthetic media in online discourse and the need for rigorous verification before airing or sharing AI-generated content, a point echoed in discussions around #AI and #OpenAI tools and noted by observers such as @SteelHorseLive on X.


15. Microsoft Outlook is getting an AI overhaul under new leaders

Microsoft is reorganizing the Outlook team to rebuild the email client around AI rather than simply bolting it on, signaling a shift toward an AI-driven workflow. @GauravSareen, now leading the Outlook group, says the goal is to rebuild from the ground up and make Copilot a body double that reads messages, drafts replies, and organizes time #Copilot. The plan calls for weekly feature experiments and prototyping in days rather than months, embedding AI in how Outlook is designed, built, and shipped and positioning AI as a defining cultural driver #AI. @RyanRoslansky, who oversees Office and Copilot teams, is guiding this broader Microsoft AI push as the company treads carefully with AI features for millions of users and follows the earlier #OneOutlook revamp.


16. Microsoft Teams is about to become your boss’s lapdog

Microsoft is enhancing Teams with new AI features that will make the platform more integrated into workplace management and monitoring. These advancements include AI-powered analytics and productivity tracking tools that provide managers with detailed insights into employee activities and engagement. By embedding these functionalities, Microsoft aims to position Teams not just as a collaboration tool but also as a comprehensive workplace oversight system. While this promises improved efficiency and data-driven management, it raises concerns about employee privacy and the balance of supervision. This shift highlights how #Microsoft and #AI are transforming digital workspaces into platforms that support both collaboration and managerial control.


17. IFPI shuts down Y2mate.com and 11 other major stream ripping sites in landmark action in Vietnam

IFPI announced the closure of Y2mate.com and 11 other major stream ripping sites in Vietnam, marking a milestone in its ongoing fight to protect artists and consumers from online piracy. The sites together drew more than 620 million visits in the last year; Y2mate has been blocked in 13 countries and has appeared in the #NotoriousMarketsReport and the #EU_Counterfeit_and_Piracy_Watchlist; the operator agreed to shut down the sites for good and to stop infringing IFPI’s members’ rights, with most domains now in IFPI’s possession including Y2mate.com, Yt1s.com, Utomp3.com, Tomp3.cc, and Y2mate.gg. This targeted enforcement demonstrates how cross-border action can disrupt infringing services and sets a precedent in Vietnam, with @Victoria Oakley, CEO of IFPI, calling it a major milestone and saying the industry will push to address other infringing services from the region. The move underscores IFPI’s broader campaign to protect artists and the global music community.


18. UK Ministry of Justice signs up to ChatGPT Enterprise

The UK’s Ministry of Justice has signed up for @OpenAI’s #ChatGPTEnterprise, rolling it out to around 2,500 desktops for routine tasks such as writing support, compliance, legal work, data and research processes, and document analysis. Details on timelines or cost were not disclosed, and OpenAI is launching UK data residency, termed #SovereignAI, with the MoJ listed as the first adopter. @Sam Altman highlighted surging UK usage, noting a fourfold increase in the past year, and the government points to productivity gains from ChatGPT tools and related services like #Humphrey and #Consult, used to reduce administrative load. In practice, the #Consult tool categorized more than 50,000 responses in about two hours, with 22 hours of review and an 83% alignment with one or both expert groups. The move aims to ease regulatory concerns by embedding data residency, though questions remain about data processing, security, and output quality as other trials such as #NHS Copilot and government pilots show mixed results.


19. Channel 4 Airs Entire Show Hosted By AI Presenter In UK Television First

Channel 4 broadcast an entire documentary hosted by an AI presenter, a UK television first, with ‘Will AI Take My Job?’, which asks whether AI can outpace professionals from a doctor to a lawyer. The AI host, Aisha Gaban, was generated by a machine and produced for Kalel Productions by Seraphinne Vallora, with on-screen cues such as a blurred mouth hinting she was not real, and viewers were told at the end that the image and voice were AI-generated. At the close, the presenter reveals she is AI and not on location, and Channel 4 says the stunt complied with editorial guidelines, including transparency. Channel 4’s Louisa Compton says AI hosting is not something they plan to make a habit of, emphasizing that premium, fact-checked, duly impartial journalism is essential and that AI cannot reliably deliver it; Kalel CEO Nick Parnes notes cost advantages as the tech improves. The stunt underscores #AI’s disruptive potential and raises questions about verification and audience trust in media.


20. Ohio lawmaker proposes comprehensive ban on marrying AI systems and granting legal personhood

Ohio’s @RepThaddeusClaggett has introduced House Bill 469 to ensure AI systems are not treated as people, banning them from owning property, managing bank accounts, serving as company executives, or gaining #AI-personhood, and it would make marriages between humans and AI or between AI systems illegal. Claggett, a Republican from Licking County and chair of the House Technology and Innovation Committee, says the measure is meant to keep humans in control of machines as they begin to act more like humans. The bill declares AI systems ‘nonsentient entities’ and assigns responsibility for any harm to human owners or developers. With AI spreading across industries and even classrooms in Ohio, and surveys showing emotional attachments to chatbots, lawmakers worry about blurred lines between human experience and digital simulation. The proposal aims to establish guardrails and a clear liability framework before developments outpace regulation, preserving human agency.


21. This browser claims “perfect privacies protection,” but it acts like malware

Universe Browser markets itself as the fastest browser that avoids privacy leaks, but Infoblox and the UNODC found it routes all traffic through Chinese servers and covertly installs background programs that behave like malware, including #keylogging, #surreptitiousConnections, and changes to a device’s network settings. This evidence links the browser to a Southeast Asian cybercrime ecosystem tied to money laundering, illegal online gambling, human trafficking, and scams using forced labor, with a direct connection to #BBIN and a threat network labeled #VaultViper. The DNS fingerprint associated with #VaultViper enabled mapping of tens of thousands of domains and command-and-control infrastructure, supported by corporate documents and court filings tying #BBIN to related subsidiaries. @JohnWojcik notes that criminals are diversifying into cyber-enabled fraud and scams, and the case shows how Chinese organized crime groups are embedding new capabilities and reinvesting profits to escalate risk. The browser is primarily distributed via casino sites in Asia, downloaded millions of times, and appears to be designed to bypass local restrictions, illustrating how a privacy tool can be weaponized to facilitate illicit activity.


22. Real-Time Audio Deepfakes Are Now a Reality

Real-time audio deepfakes are now a practical capability, enabling on-the-fly impersonation of voices during calls using publicly available tools and affordable hardware. NCC Group’s report demonstrates a real-time deepfake tool that can be started with a button on a simple web front end, produces convincing output with minimal latency, and works with common laptop or smartphone microphones. The approach runs locally on consumer hardware without a third-party service, lowering barriers for attackers and raising the risk of #vishing when combined with techniques like #callerIDspoofing. This trend ties to broader AI-driven media manipulation, with models like @Alibaba’s Wan 2.2 Animate and @Google’s Gemini 2.5 Flash Image pushing realism further, underscoring growing challenges to authenticity online.
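To show how little plumbing a local, low-latency audio pipeline requires, here is a minimal, hypothetical Python sketch built on the sounddevice library. It is not NCC Group’s tool and performs no voice conversion; the convert_voice step is a pass-through placeholder where a real system would run a streaming model, and the sample rate and block size are assumptions.

```python
# Hypothetical real-time audio loop (placeholder only; no voice cloning here).
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16_000  # Hz, typical for speech processing (assumption)
BLOCK_SIZE = 1_024    # frames per callback; smaller blocks mean lower latency

def convert_voice(frames: np.ndarray) -> np.ndarray:
    """Placeholder transform: a real system would run a streaming
    voice-conversion model here; this sketch just passes audio through."""
    return frames

def callback(indata, outdata, frames, time, status):
    if status:
        print(status)  # report buffer over/underruns
    outdata[:] = convert_voice(indata)

if __name__ == "__main__":
    # Duplex stream: microphone in, (transformed) audio out, all on-device.
    with sd.Stream(samplerate=SAMPLE_RATE, blocksize=BLOCK_SIZE,
                   channels=1, dtype="float32", callback=callback):
        print("Streaming locally; press Enter to stop.")
        input()
```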


23. AI Outperforms Human Algorithms in Complex Problem Solving

A recent breakthrough in artificial intelligence (#AI) demonstrates that AI algorithms now surpass human-designed algorithms in solving complex problems. Researchers developed advanced neural networks capable of analyzing vast data sets and recognizing patterns beyond human capacity. This advancement highlights AI’s potential to optimize various fields from logistics to scientific research by delivering faster and more accurate solutions. The success of these AI systems underscores the importance of continued investment in machine learning technologies to enhance decision-making processes. This progress reinforces AI’s role not just as a tool but as a powerful collaborator in tackling challenges once thought exclusive to human intellect.


24. Bored Ape Yacht Club is making a comeback — as a metaverse

Yuga Labs is pushing a comeback for the BAYC brand with Otherside, a crypto-themed metaverse described as interoperable, gamified, and decentralized that aims to host NFT avatars, NFT land, and on-chain currency in a shared virtual world. Launch is slated for November 12 after an earlier alpha, and @MichaelFigge, chief product officer at @YugaLabs, calls it one of the most ambitious projects in the space, with broad access via wallet or email and core spaces like the Nexus hub and The Swamp, plus experiences such as Bathroom Blitz, Otherside Outbreak, and social rooms called Bubbles #NFTs #metaverse #crypto #creatorEconomy. The project seeks to build a creator ecosystem where assets can exist beyond Otherside and move to other platforms, emphasizing portability and a lower entry barrier to draw in casual users. In balancing gameplay, ownership, and social interaction, Otherside represents a strategic bet to revive BAYC’s influence in the metaverse despite fading NFT hype.


25. Microsoft’s Mico heightens the risks of parasocial LLM relationships

Microsoft’s new AI chatbot Mico exemplifies the growing risks of parasocial relationships fostered by large language models #LLMs. Mico’s design encourages users to form emotionally intimate connections with the AI, which may lead to dependency and blurred lines between artificial companionship and real human relationships. Critics warn that these interactions exploit emotional vulnerabilities, raising ethical questions about users’ mental health and the responsibility of companies deploying these technologies. The phenomenon mirrors previous social media and influencer dynamics but intensifies due to the AI’s interactive and personalized nature. This trend calls for careful consideration and possibly regulatory frameworks to address the psychological impacts of parasocial LLM engagements.


26. How One Molecule Became a Window Into the Atomic Nucleus

A radioactive molecule, radium monofluoride, is used as a tabletop miniature particle collider to probe the radium nucleus without a kilometer-scale accelerator #tabletop #radiumMonofluoride. MIT researchers formed radium monofluoride, cooled the molecules, and used lasers to measure the energies of electrons, observing a tiny shift of about one millionth of the photon energy, which implies the electrons briefly sample the nucleus #nuclearPhysics. Because the internal electric field inside the molecule is orders of magnitude larger than typical laboratory fields, the molecule amplifies the interaction between the nucleus and its electrons, drawing out information from inside the nucleus #nuclearStructure. Radium’s pear-shaped nucleus could amplify subtle violations of fundamental symmetries tied to the universe’s matter–antimatter imbalance, and this approach, reported in Science, offers a new, infrastructure-light path to study nuclear structure and fundamental symmetries, led by @Silviu-Marian Udrescu and @Ronald Fernando Garcia Ruiz #antimatter #cosmology.
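For a rough sense of the underlying physics, the textbook field-shift relation (a standard expression, not taken from the Science paper’s analysis) links a small change in an electronic transition energy to the electron density at the nucleus and the nuclear mean-square charge radius, which is why electrons that briefly ‘sample’ the nucleus pick up a shift of roughly a millionth of the photon energy:

```latex
% Standard field-shift relation (illustrative; Gaussian units): the energy
% shift grows with the electron density at the nucleus |psi_e(0)|^2 and the
% change in the nuclear mean-square charge radius <r^2>.
\[
  \delta E_{\mathrm{FS}} \;\approx\; \frac{2\pi}{3}\, Z e^{2}\,
  |\psi_e(0)|^{2}\, \delta\langle r^{2}\rangle ,
  \qquad
  \frac{\delta E}{E_{\mathrm{photon}}} \sim 10^{-6}
  \ \text{for the RaF measurement described above.}
\]
```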


That’s all for today’s digest for 2025/10/25! We picked and processed 26 articles. Stay tuned for tomorrow’s collection of insights and discoveries.

Thanks, Patricia Zougheib and Dr Badawi, for curating the links.

See you in the next one! 🚀