Welcome to today’s curated collection of interesting links and insights for 2025/11/03. Our hand-picked, AI-optimized system has processed and summarized 21 articles from across the internet to bring you the latest technology news.
As previously aired 🔴LIVE on Clubhouse, Chatter Social, and TikTok.
Also available as a #Podcast on Apple 📻, Spotify🛜, Anghami, and Amazon🎧 or anywhere else you listen to podcasts.
1. AI Medical Advice Can Empower Patients, but Clinical Judgment Must Come First
AI can empower patients and expand access to information, but its medical advice can be unreliable and may overshadow a physician’s clinical judgment when patients defer to it. Evidence from a clinical case shows a patient who declined a recommended medication after an AI advised against off-label use; in #addictionmedicine, many effective treatments lack FDA approval for that indication, raising questions about when #off-label use is appropriate and how AI intersects with patient care. Safeguards built into models like @OpenAI’s tools and @Meta’s #Llama help prevent harm, yet those safeguards can deteriorate in longer exchanges, and prompts can reinforce harmful thinking or mislead patients. As AI becomes increasingly integrated into health care, doctors must balance potential benefits with patient AI literacy and maintain deliberate clinical oversight, using simple clarifying questions to ensure decisions remain patient-centered while integrating AI thoughtfully into practice.
2. Microsoft AI chief says only biological beings can be conscious
Microsoft AI chief @Mustafa_Suleyman argues that consciousness emerges only in biological beings and that pursuing AI projects that simulate consciousness is misguided. He told CNBC at the AfroTech Conference that asking the wrong question leads to the wrong answer and that it is not work people should be doing. He draws a clear line between smarter machines and real experience, saying AI models don’t feel pain and that the sense of consciousness is a simulation, invoking the idea of #BiologicalNaturalism associated with @John_Searle. Suleyman has long written on AI risks, including his book The Coming Wave and the essay ‘We must build AI for people; not to be a person’. The piece notes the rapid growth of the AI companion market from #Meta and #xAI and situates these debates amid the broader push toward #AGI, with @Sam_Altman acknowledging rapid progress while cautioning about terminology. The overall message underscores the need for responsible AI development and the risk of mislabeling machine intelligence as conscious, guiding the discussion back to how we govern and design AI for people rather than simulating personhood.
3. Palantir Thinks College Might Be a Waste, So It’s Hiring High-School Grads
Palantir Technologies is challenging traditional hiring norms by recruiting high-school graduates, suggesting that college may not be necessary for success in tech careers. The company emphasizes skills and problem-solving abilities over formal education credentials, which allows it to tap into a broader talent pool. This approach reflects a growing trend among tech firms to value practical expertise and on-the-job learning, potentially disrupting conventional pathways to employment. By investing in training and mentorship for younger recruits, Palantir aims to cultivate talent and foster innovation without relying on degree requirements. This strategy aligns with the company’s focus on data analytics and software development, where hands-on experience can be as critical as formal education.
4. Energy Bills on the Rise, Driven in Part by Power-Hungry AI
Energy bills are increasing due to several factors, including the growing power consumption of artificial intelligence (#AI) technologies. AI systems require significant computing power, leading to higher electricity usage that contributes to the rising energy costs for consumers. This trend highlights the broader impact of digital innovation on everyday expenses and the importance of considering energy efficiency in AI development. The rise in energy bills underscores the need for policies and technologies aimed at managing power consumption without stifling technological progress. Understanding this dynamic helps consumers and stakeholders anticipate and adapt to changes in energy demands driven by emerging technologies.
5. SK hynix to become biggest supercycle winner and overtake TSMC in chip profit by 2027: Nomura
SK hynix is positioned to be the biggest winner of the ongoing chip #supercycle, with Nomura Securities forecasting it could overtake #TSMC in chip profits by 2027 as supply remains tight. Nomura raises its forecasts, predicting operating profits of 99 trillion won in 2026 and 128 trillion won in 2027, driven by demand for #HBM chips and other memory for servers, including AI servers and cloud workloads. The brokerage notes that limited cleanroom capacity and long construction lead times will restrain supply until mid-2027, with industry-wide output only accelerating toward late 2027, supporting higher memory and chip prices for #HBM, #DRAM, and #NAND. SK hynix has already secured orders for DRAM and NAND into next year and has #HBM supply deals with key clients including @NVIDIA, with its sixth-generation #HBM4 in mass production and slated for shipment in Q4 2025 and full-scale sales in 2026. The outlook underlines a shift in SK hynix from a pure memory vendor to a core supplier for AI and cloud-server workloads, potentially reshaping profitability dynamics in the global chip industry.
6. Police cameras track billions of license plates per month. Communities are pushing back.
More than 5,000 law enforcement departments across the U.S. use interconnected #ALPRs from @Flock Safety to track residents’ movements, creating a centralized database fed by license-plate readers, facial-recognition cameras, drones and audio detectors. In Sedona, Sandy Boyce led a community push against the cameras, launching livefreeaz.com to mobilize residents, and on Sept. 9 the City Council unanimously voted to end Sedona’s contract. Across seven states (Arizona, Colorado, New York, Oregon, Tennessee, Texas, Virginia) activists have sought to pause or cancel contracts, joining Austin, Oak Park, Eugene, Springfield, Evanston, Scarsdale and Gig Harbor in removing or pausing Flock access. The effort is cross-partisan, reflecting concerns about privacy and data sharing with the federal government, drawing supporters ranging from conservatives to progressives, including @Donald Trump supporters and @Robert F. Kennedy Jr. backers. The article notes that while some contracts have been paused or ended, the trend remains limited relative to Flock’s broader reach in nearly 800 U.S. cities.
7. Republicans Claim Biden Is Censoring YouTube
Republican politicians have accused former President Joe Biden and his administration of censoring content on YouTube, alleging coordinated efforts to suppress dissenting views on COVID-19 and related topics. The claims center on supposed government pressure on #socialmedia platforms to remove or limit certain videos, framing it as an attack on free speech. However, these allegations overlook YouTube’s own policies and independent content moderation decisions, which often target misinformation according to company guidelines rather than direct government mandates. Analysts note that conflating platform moderation with government censorship misrepresents the complexities of content governance in the digital age and fuels partisan conflict. This situation illustrates the ongoing tension between regulating misinformation and protecting expression on major #online platforms.
8. Google pulls Gemma from AI Studio after Senator Blackburn accuses model of defamation
Google removed its AI chatbot Gemma from the AI Studio platform following accusations from Senator Marsha Blackburn that the model produced defamatory content. The senator highlighted instances where Gemma allegedly made false statements about individuals, raising concerns about AI ethics and accountability. Google responded by disabling Gemma, emphasizing its commitment to responsible AI development and ongoing improvements to content moderation. This incident underscores the challenges tech companies face in balancing innovation with preventing harmful misinformation in AI tools. Google’s decision reflects a broader industry push to enhance AI safety and respond promptly to public and political scrutiny.
9. White House launches spoof MySpace page mocking Democratic leaders over shutdown
The White House launched a spoof MySpace-style page named MySafeSpace to mock Senate Minority Leader @ChuckSchumer and House Minority Leader @HakeemJeffries over the #governmentshutdown. The page features a sombrero-wearing image of @HakeemJeffries, a parody profile linking to a ‘voting record’ that redirects to a Hill article about Democrats blocking funding, a ‘Hakeem Shutdown Blog’ with links to White House statements, a list of songs meant to imply Democrats’ preferences, and a ‘Top 8 Friends’ section that includes @JoeBiden and a photo of Schumer labeled ‘Chucky’. It also includes an edited video placing @HakeemJeffries and @ChuckSchumer at the White House Halloween celebration wearing sombreros and refers to Jeffries as ‘Sombrero Guy’ and ‘Temu Obama’. The piece frames the spoof as part of broader messaging around the funding stalemate, noting that a 60-vote requirement in the Senate blocks a temporary reopening and that Democrats oppose a stopgap without an ACA subsidies extension, linking the tactic to how public perception is shaped during the #AffordableCareAct discussions.
10. Studio Ghibli And Japanese Game Publishers Demand OpenAI Stop Using Their Content In Sora 2
OpenAI’s Sora 2 has prompted a formal demand from CODA and Japanese officials to stop using members’ copyrighted content to train the tool. CODA, representing major publishers such as Bandai Namco, Square Enix, and Studio Ghibli, says a large portion of Sora 2’s outputs closely resemble licensed works and that training without permission may infringe Japanese law, and it asks OpenAI to halt use of such works for ML and to respond to infringement inquiries. OpenAI reportedly offered an opt-out to some studios before Sora 2’s launch, but CODA notes that opt-out may not shield rights-holders under Japan’s copyright system. Government voices, led by Minoru Kiuchi and Digital Minister Masaaki Taira, warn that if voluntary compliance fails, authorities could investigate AI usage under the AI Promotion Act, which took full effect in September 2025. The case underscores the tension between rapid AI capabilities and IP protection in Japan, highlighting how rights-holders and the government expect safeguards and clear permissions as OpenAI navigates responses, with @Sam Altman and the company weighing next steps.
11. Reddit’s Cofounder And CEO Steve Huffman Is Now A Billionaire
Reddit cofounder and CEO @Steve Huffman has joined the billionaire ranks thanks to strong ad sales and profits, and he now aims to become invaluable to #AI companies. The company posted five straight quarters of profitability, with Q3 net income of $163 million, and Reddit’s stock rally has lifted Huffman’s net worth to about $1.2 billion, driven by 3.1 million shares (roughly 2.3%), plus $190 million in cash and other investments. Huffman’s ascent follows years of losses after the 2006 sale to Condé Nast; he and cofounder @Alexis Ohanian returned to Reddit, with Huffman rejoining in 2015 to steer the company through crises as daily active users roughly doubled since 2023 and organic search referrals surged. He says Reddit is “for humans by humans” and positions the platform as a more authentic information source amid AI-driven content, signaling potential collaborations with AI firms. The article frames this milestone as the culmination of Reddit’s monetization journey and a strategic shift toward AI partnerships, while continuing to prioritize authentic, human-driven recommendations for users and advertisers.
12. Could the internet go offline? Inside the fragile system holding the modern world together
A single, brittle constellation of decades‑old protocols and physical infrastructure underpins a web of services, suggesting the internet could fail in a cascade of outages if key hubs are damaged or misconfigured. The piece sketches a worst‑case “big one,” such as a summertime tornado devastating Google’s us-central1 datacentre cluster in Council Bluffs, Iowa, taking down YouTube, Gmail and other services while government functions slow and people revert to offline routines. It also notes the concentration of critical capacity in a few providers and hubs (#AWS, #Google, #datacentres), increasing vulnerability to weather, outages, and a misbehaving line of code in #DNS that could trigger a cascading crash. The real fear is a sudden, snowballing error in the decades‑old protocols that underlie the internet, which would test whether two networked devices separated by a router could still communicate when infrastructure proves unreliable. Experts @Michał “rysiek” Woźniak and @Steven Murdoch emphasize that despite resilience, the system’s economics favor centralization, making outages feel inevitable at scale, a reality that looks far less distant in everyday life than many would like.
13. Taiwan unveils undersea cable security initiative – Taipei Times
Taiwan launches the Management Initiative on International Undersea Cables, a global partnership named #RISK after its four pillars (risk mitigation, information sharing, systemic reform and knowledge building), to improve the security of undersea cables, inviting stakeholders to join a platform that aims to align standards and promote best practices. At a seminar in Taipei, @Lin Chia-lung says the project is not a national project but a global partnership and wants to secure a future where data flows freely and securely and where connectivity is treated as a public good rather than a geopolitical weapon. MEP @Rihards Kols calls cables the nervous system of democratic connectivity and notes more than 600 operational or planned cables worldwide covering about 1.5 million kilometers, with 12 incidents since 2023 in the Baltic region that he characterizes as sabotage. He envisions EU-Taiwan cooperation using drone technologies to monitor cables, and frames the initiative as a natural base for research, innovation and exchange of best practices, coinciding with the EU’s Readiness 2030 strategy. Taiwan’s geography as a digital infrastructure hub underlines the need for resilience through planning, implementation and cooperation, reinforcing the message that this initiative seeks global participation to safeguard connectivity.
14. Excessive screen time among youth may pose heart health risks
Excessive screen time among children and adolescents is linked to higher cardiometabolic risk, with the strongest association seen in those who sleep less. In two Danish cohorts totaling more than 1,000 participants, each additional hour of recreational screen time was associated with about a 0.08 standard-deviation rise in a composite cardiometabolic score among 10-year-olds and about 0.13 SD among 18-year-olds, reflecting higher blood pressure, triglycerides and insulin resistance. The score draws on waist size, blood pressure, HDL, triglycerides and glucose and is adjusted for age and sex; the researchers note that a three- to six-hour daily increase in screen time could meaningfully raise risk across a population. Lead author @David Horner and colleagues say limiting discretionary screen time in childhood and adolescence may protect long-term heart and metabolic health, underscoring that screen habits formed early in life can shape later health. The findings align with the American Heart Association’s view that cardiometabolic risk is accruing at younger ages and that only a minority of youth had favorable health in recent NHANES data, reinforcing the need for balanced daily routines. #screenTime #sleep #cardiometabolicRisk
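To make the “0.08 SD rise in a composite score” concrete, here is a minimal sketch of how such a composite cardiometabolic score is typically built: each component is standardized to a z-score and the z-scores are averaged per subject. This assumes equal weighting of components; the study’s exact weighting and HDL sign-inversion are not reproduced here, and the cohort numbers below are made up for illustration.

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize raw measurements to z-scores (mean 0, SD 1)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def composite_score(components):
    """Average each subject's z-scores across components into one
    cardiometabolic score. `components` maps a component name (e.g.
    waist, blood pressure, triglycerides, glucose) to raw values,
    one per subject, in the same subject order."""
    standardized = [z_scores(vals) for vals in components.values()]
    return [mean(subject) for subject in zip(*standardized)]

# Toy cohort of 4 subjects (hypothetical numbers, not study data)
cohort = {
    "waist_cm":      [60, 65, 70, 80],
    "systolic_bp":   [100, 105, 110, 120],
    "triglycerides": [0.8, 0.9, 1.1, 1.5],
    "glucose":       [4.5, 4.8, 5.0, 5.5],
}
scores = composite_score(cohort)
```

Under this convention the scores are unitless standard-deviation offsets centered on zero, which is what lets the researchers express the screen-time effect as “about 0.08 SD per extra hour” at the population level.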
15. Two Windows vulnerabilities, one a 0-day, are under active exploitation
Two Windows vulnerabilities, one a zero-day tracked as CVE-2025-9491 and the other a critical flaw in Windows Server Update Services, are under active exploitation in broad campaigns. Evidence shows the zero-day has been exploited since 2017 by as many as 11 different APT groups, per @Trend Micro, with operations spanning nearly 60 countries and targeting regions including the US, Canada, Russia, and Korea; the exploit culminates in a PlugX payload and keeps the binary encrypted with RC4. Analysts note that the WSUS flaw CVE-2025-59287 emerged after an incomplete patch in October’s Patch Tuesday release and has driven a wave of exploitation against internet-facing WSUS servers since Oct 23, observed by Huntress, Eye, and Sophos. Mitigations focus on blocking or restricting .lnk file usage from untrusted origins and disabling automatic shortcut resolution in Windows Explorer, underscoring the need for defense-in-depth against RCE and APT activity. #PlugX #RC4 #WSUS
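RC4, the stream cipher the campaign reportedly uses to keep the PlugX binary encrypted on disk, is simple enough to sketch in full. This is a generic textbook implementation for illustration, not the actors’ loader code, and the key and plaintext below are made up; RC4 is symmetric, so the same function both encrypts and decrypts.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4: key-scheduling (KSA), then keystream generation
    (PRGA) XORed with the data. Encrypting twice with the same key
    returns the original bytes."""
    # Key-scheduling algorithm: permute S under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm: emit keystream, XOR with data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ciphertext = rc4(b"secretkey", b"payload bytes")
assert rc4(b"secretkey", ciphertext) == b"payload bytes"  # symmetric
```

From a defender’s perspective, the simplicity cuts both ways: once the embedded key is recovered from a loader, analysts can decrypt the payload with exactly this routine, which is one reason RC4-wrapped payloads are a well-known pattern in PlugX tooling.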
16. Our devices work for Big Tech, not us — Financial Times (@TimHarford)
@TimHarford argues that the modern obsession with digital productivity tools serves #BigTech more than it serves their users. Drawing on research by @GloriaMark, he notes that the average person switches screens every 47 seconds, trapped in an endless loop of notifications, micro-tasks, and algorithmic distractions. The article defends the much-mocked $100 Analog paper to-do system as a symbol of rebellion against digital overload, suggesting that physical tools, like index cards and notebooks, restore focus and autonomy. Harford explains that the best productivity practices are low-tech: writing things down, maintaining a short priority list, and managing tasks decisively. While digital calendars and apps can help, he concludes that technology’s multitasking design fundamentally conflicts with human concentration, making paper systems ironically more “advanced” for deep work.
Digital Minimalism
Digital minimalism is a philosophy and lifestyle that advocates using technology deliberately and sparingly to support one’s values and goals rather than letting it dictate attention. Popularized by @CalNewport, the concept emphasizes intentionality, boundaries, and the replacement of shallow digital habits with meaningful, focused work. It does not reject technology outright but calls for designing one’s digital environment to minimize distraction and maximize agency, turning tools into servants rather than masters.
17. Audrey Tang, hacker and Taiwanese digital minister: ‘AI is a parasite that fosters polarization’
@Audrey Tang, the hacker and Taiwan’s minister of digital affairs, uses her position and her public profile to steer the internet toward empowering citizens and renewing democracy, even as she warns that ‘AI is a parasite that fosters polarization.’ She has won the 2025 Right Livelihood Award for advancing the social use of digital technology to empower citizens, renew democracy and heal divides. In Taiwan she declared broadband a human right, built a real-time mask map during the COVID-19 pandemic, and led campaigns against #disinformation and #deepfakes, while creating the #ROOST system to help detect child sexual abuse on platforms like Bluesky and Roblox. Her international work includes helping California Governor Gavin Newsom create Engaged California and collaborating with Japan, where @Takahiro_Anno read Tang’s Plurality, built a platform based on her principles and won more than 1% of the vote, later founding Team Mirai. Tang’s approach shows technology can reframe governance as a citizen-led enterprise, aiming to rebuild trust, reduce polarization and give people a direct role in policymaking, a mission that the 2025 Right Livelihood Award underscores.
18. Does the US military need a Cyber Force?
The United States faces a talent drain in cyberspace and a fragmented approach across the services, raising the question of whether a dedicated #CYBERFORCE is needed. About 225,000 personnel, civilians, and contractors work in cyber roles across the DoD, but training and terminology vary by service, inhibiting CYBERCOM’s modular Cyber Mission Force, which should let teams from different services swap in and out. Analysts such as @Aden Magee note that CYBERCOM’s structure and the services’ divergent planning create an incoherent strategy, a concern echoed by think tanks including RAND and #CSIS, which report recruitment and readiness gaps. Adversaries like #China, #Russia, and #Iran have breached sensitive systems, including F-35 designs and the personal data of millions of Americans, underscoring the risk of a disjointed cyber posture. The article contrasts CYBERCOM with #SOCOM, which has been more proactive in defining requirements, suggesting that a unified or new organizational model could improve cyber effectiveness.
19. YouTube videos about bypassing Windows 11 restrictions now playing ads
YouTube has started showing ads on videos that demonstrate methods to bypass Windows 11 hardware restrictions, such as TPM 2.0 and CPU requirements. Historically, these videos were excluded from monetization under policies on ‘deceptive practices’ or ‘content that facilitates dishonest behavior.’ The shift suggests either an updated policy stance or a change in how YouTube’s systems classify such content, allowing creators to earn revenue despite its nature. This change affects how #Windows11 users learn about unsupported upgrades and may encourage more widespread firmware or configuration modifications. Overall, the development reflects YouTube’s balancing act between enforcing guidelines and supporting content that informs users about current PC technology limitations.
20. Utah and California Require Businesses to Disclose AI Use in Customer Interactions
Utah and California are implementing laws requiring businesses to disclose when artificial intelligence is used for customer interactions like chatbots or image creation, enhancing transparency for consumers. The new regulations aim to address growing concerns about AI-generated content and its impact on trust and authenticity in digital communications. For example, California’s law mandates clear warnings about AI usage, specifically targeting generative AI technologies that produce images and text. This move reflects increasing government efforts to regulate #AI deployment to ensure consumers are informed and protected from potential deception. As AI becomes more integrated into business operations, these requirements encourage ethical use and accountability in automated engagements.
21. Meta: Pirated Adult Film Downloads Were for Personal Use, Not AI Training
Meta clarified that the pirated adult films it downloaded were intended for personal use by its employees, not for training artificial intelligence models. The company responded to claims linking its content acquisition to AI training, emphasizing that the downloads were limited to individuals and not part of any data collection strategy for AI enhancement. This distinction addresses concerns over unauthorized use of copyrighted materials for AI development. Meta’s stance highlights the importance of transparency about data sources in AI training contexts. Their statement seeks to separate employee behavior from company practices to alleviate legal and ethical scrutiny.
That’s all for today’s digest for 2025/11/03! We picked and processed 21 articles. Stay tuned for tomorrow’s collection of insights and discoveries.
Thanks, Patricia Zougheib and Dr Badawi, for curating the links.
See you in the next one! 🚀