#BrainUp Daily Tech News – (Monday, April 13ᵗʰ)
Welcome to today’s curated collection of interesting links and insights for 2026/04/13. Our hand-picked, AI-optimized system has processed and summarized 37 articles from across the internet to bring you the latest technology news.
As previously aired🔴LIVE on Clubhouse, Chatter Social, Instagram, Twitch, X, YouTube, and TikTok.
Also available as a #Podcast on Apple 📻, Spotify 🛜, Anghami, and Amazon 🎧, or anywhere else you listen to podcasts.
1. Trump praises Palantir as stock has worst week in over a year and Iran conflict drags on
President @Donald Trump praised Palantir on Truth Social even as PLTR fell about 14% for its worst week in over a year, saying the company has strong “war fighting capabilities.” The article says the U.S. military has reportedly used Palantir’s AI-powered Maven Smart System to help identify targets tied to strikes on Iran that began in late February, and notes Palantir gets more than half of its U.S. revenue from government customers like the Pentagon and ICE. It describes how CEO @Alex Karp, despite past criticism of Trump and a prior donation to @Joe Biden, has aligned with the current administration and often defends Palantir’s work amid criticism of its surveillance and immigration-enforcement tools, while his stance on Israel after Oct. 7 also sparked employee departures. Watchdog group CREW called Trump’s ticker-tagged post unusual, citing Palantir’s sponsorship of Trump administration events and donations, and suggested it could be intended to bolster a key backer’s struggling stock. The piece also links the selloff and controversy to #AI competition and security concerns, including Palantir’s use of models from Anthropic, Karp’s stated plan to phase them out, and renewed criticism from short-seller @Michael Burry.
2. Companies switching to electric trucks despite slower EV sales, new report says
Companies in Washington and across the country are continuing to switch to #zero-emission fleet trucks even as overall EV sales slow, citing lower costs and reduced diesel pollution, according to a report from @CALSTART. The report credits Washington incentives and state leadership for one of the strongest commercial EV showings relative to market size, and notes that many commercial routes can already be electrified with current #technology, with @Dakota Semler of @Xos Trucks saying delivery and garbage trucks often travel under 200 miles per trip. It also highlights a new $126 million program, #WAZIP, launching this month to provide discounts that help fleet operators adopt electric or #hydrogen-fueled trucks and cut diesel emissions. Industry voices say broader adoption depends on addressing #charging infrastructure barriers, with @Stefan Tongur of @Electreon emphasizing the need for shared charging options. The report adds that electric trucks are cheaper to run and maintain, and that drivers report preferring them to diesel trucks, reinforcing the case for continued fleet electrification.
3. Genetic predictors of GLP-1 receptor agonist weight loss and side effects
A large-scale study published in @Nature analyzes nearly 28,000 individuals to uncover why widely used #GLP1 drugs like semaglutide and tirzepatide produce dramatically varying outcomes, identifying specific genetic variants in the #GLP1R and #GIPR genes that directly influence both the effectiveness of weight loss and the likelihood of side effects such as nausea and vomiting. While a key variant in GLP1R is associated with modestly greater weight loss, the overall findings reveal that genetics plays only a secondary role compared to dominant non-genetic factors like sex, age, drug type, dosage, and treatment duration, which together explain a larger portion of variability. Importantly, the study demonstrates that combining genetic and clinical data can stratify patients by expected outcomes, laying the groundwork for #PrecisionMedicine approaches in obesity treatment, though current evidence suggests genetic testing alone is not yet sufficient to guide real-world prescribing decisions.
4. Two-thirds of underage Australians access social media despite ban, research finds
Despite legal restrictions in Australia banning social media use for children under 13, new research reveals that about two-thirds of underage Australians still access these platforms. This widespread access persists due to factors such as inadequate age-verification measures and reliance on parental consent that may not be strictly enforced. The findings raise concerns about children’s exposure to potentially harmful online content and highlight the challenges policymakers face in effectively regulating digital spaces to protect young users. The research calls for stronger enforcement mechanisms and greater awareness among parents and guardians to ensure that age restrictions serve their intended protective function.
5. AI Is Using So Much Energy That Computing Firepower Is Running Out
AI development is increasingly constrained by the enormous energy demands and limited availability of computing resources needed for training sophisticated models. The surge in AI’s computational needs stems from advances in deep learning and more powerful GPUs, which require massive energy consumption and cooling infrastructure. This trend raises concerns about sustainability and the feasibility of maintaining current growth rates in AI capability without significant improvements in energy efficiency or hardware innovations. Efforts to optimize algorithms and move towards specialized chips aim to alleviate energy bottlenecks, but the challenge remains urgent as AI continues to expand across industries. The energy and resource constraints highlight the need for balancing AI progress with environmental and practical limits.
6. AI models are terrible at betting on soccer—especially xAI Grok
A “KellyBench” study by AI start-up General Reasoning found that leading #AI models from Google, OpenAI, Anthropic, and xAI generally lost money when tasked with betting across a full Premier League season, suggesting weaknesses on long-horizon, real-world-style problems. The researchers simulated the 2023-24 season, gave models detailed historical team and match statistics, and instructed them to build betting strategies that maximize returns while managing risk, placing wagers on match outcomes and goal totals without internet access and with data updated as the season progressed. Across three attempts starting from a normalized £100,000 bankroll, every model had negative mean returns: Anthropic’s Claude Opus 4.6 performed best at a mean -11 percent and nearly broke even once, while xAI’s Grok 4.20 went bankrupt and failed to finish its other tries, and Google’s Gemini 3.1 Pro showed high volatility with one +33.7 percent run but another bankruptcy. @Ross Taylor of General Reasoning argues that common AI benchmarks are too static and do not reflect the “chaos and complexity” of real settings, and the paper, not yet peer reviewed, is presented as a counterweight to hype around recent gains in software coding automation. The results are framed as evidence that, despite advances in specific tasks like programming, frontier models can still systematically underperform humans when adaptation over time and risk management matter.
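The benchmark’s name nods to the Kelly criterion, the classic formula for sizing bets to maximize long-run bankroll growth while limiting the risk of ruin. A minimal sketch of the idea (our illustration under standard assumptions, not the paper’s actual harness):

```python
def kelly_fraction(p: float, decimal_odds: float) -> float:
    """Kelly-optimal fraction of bankroll to stake on a single bet.

    p: estimated win probability.
    decimal_odds: total payout per unit staked (2.50 means a winning
    1-unit bet returns 2.50, i.e. 1.50 profit).
    """
    b = decimal_odds - 1.0   # net odds received on a win
    q = 1.0 - p              # probability of losing
    f = (b * p - q) / b      # Kelly formula: edge divided by net odds
    return max(f, 0.0)       # no edge, no bet

# A model that prices a 2.50 outcome at a 45% win chance should stake
# about 8.3% of bankroll; at a 38% estimate the edge is negative, so
# the correct stake is zero.
```

Staking persistently above the Kelly fraction drives expected log-growth negative, which is one way a betting agent can go bankrupt over a season despite occasionally winning big.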
7. Perplexity CEO says AI job displacement can be part of a ‘glorious future’
@Aravind Srinivas, co-founder and CEO of @Perplexity AI, argues that #AI-driven job displacement can be part of a “glorious future” because many people dislike their jobs and could use AI tools to learn new skills and start small businesses or freelance. He frames the shift as #AI taking over routine tasks while lowering barriers to entrepreneurship by making tools easier to access and cheaper to start with than traditional business models. The article notes that this optimistic path depends on individuals having time, stability, and risk tolerance, which may not be realistic for everyone. It also cites current labor-market data showing layoffs already linked to #AI, including 60,620 job cuts announced last month with 15,341 attributed to AI, alongside more than 52,000 tech layoffs in the first quarter of 2026 and examples such as cuts at @Amazon, @Meta, and @Oracle. While unemployment is near 4.3% and job growth continues in other sectors, the piece links Srinivas’s view to a broader debate about uneven #automation impacts across industries.
8. SoftBank, other major Japan firms set up new company for AI development
Major Japanese companies including @SoftBank Corp., @NEC Corp., @Honda Motor Co. and @Sony Group Corp. have established a new firm to develop large-scale #AI for use by domestic companies, according to sources familiar with the matter. The initiative reflects a coordinated effort among multiple major firms to build advanced, domestically usable #AI capabilities for Japan-based business users.
9. Karpathy says developers have ‘AI Psychosis.’ Everyone else is next.
The source page supplied only the headline, a brief description referencing #Anthropic, #GenZ backlash, and #Meta, and newsletter subscription forms, so there is not enough material to summarize the article itself. The description suggests a shift in which developers are experiencing AI’s impacts first but broader groups will feel them soon, framed by @Karpathy’s claim about “AI Psychosis.” Beyond that framing, no supporting details, examples, arguments, or quotes from the article are available, so a content-based summary cannot be produced.
10. Microsoft exec suggests AI agents will need to buy software licenses, just like employees
@Microsoft executive @Rajesh Jha suggested that as companies deploy fleets of #AI agents, those agents may need their own identities inside software systems, such as logins, inboxes, and paid #SaaS “seats,” potentially expanding rather than shrinking enterprise software revenue. He argued that such agents become new “seat opportunities,” envisioning organizations with more agents than humans, so even if headcount falls, the number of paid licenses could rise, for example fewer employees but many agents per employee. Critics such as @Nenad Milicevic of AlixPartners counter that agents will reduce human interaction with software and therefore cut license counts, increasing customer pushback against seat-based pricing that no longer matches usage. Milicevic says vendors that charge extra for machine access may lose customers to more open platforms that allow agents to operate freely. The debate hinges on whether agents are merely extensions of a person, making extra fees feel like double billing, or autonomous workers, making per-agent licensing seem inevitable, a choice that could shape software economics for years.
11. Is AI the greatest art heist in history?
The article argues that #generativeAI has become a vast, largely consequence-free system of appropriation that is “plundering” artists by reproducing their work without consent, credit, or compensation. The author, an artist, describes seeing knockoffs of their style starting in 2022 and learning that #AI image generators had scraped their work, part of “billions of images” taken from the internet, which they call the greatest art heist in history. It cites industry figures and events as evidence of deliberate strategy and coercive messaging, including @Marc Andreessen saying in 2023 that enforcing #copyright would “kill” the industry, and a 2023 journalism festival in Perugia where advocates promoted newsroom adoption while privately predicting the elimination of writers. In response, the author and journalist Marisa Mazria Katz organized an open letter to keep AI images out of newsrooms, while illustrators @Sarah Andersen, @Kelly McKernan, and @Karla Ortiz sued Midjourney and Stability AI in an ongoing case alleging rights violations affecting millions of artists. The piece links these conflicts to power dynamics, noting open contempt from wealthy tech leaders and quoting @Mira Murati suggesting in 2024 that some creative jobs displaced by #AI “shouldn’t have been there in the first place,” framing the outcome as political rather than inevitable.
12. Scientists recover 90% of lithium from old batteries — here’s why it matters
A metals plant in Tsuruga, Fukui Prefecture, run by #JXMetalsCircularSolutions, has become a major player in #batteryRecycling by boosting #lithium recovery from spent #lithiumIonBatteries to about 90%. In an NHK World video, vice president Tadashi Nakagawa said safe recycling is crucial and expressed hope the technology will benefit Japan, while reporting credited innovation and iteration for raising recovery from under 50% to roughly 90%, one of the best rates globally. The article frames this as significant as #electricVehicles and consumer electronics expand, increasing both battery waste and demand for critical minerals, and notes estimates that the lithium recovery industry could grow from $13 billion last year to $70 billion by 2035. It also points to competition, citing a U.S. company founded by former Tesla CTO @JB Straubel that claims 95% lithium recovery. Overall, the piece argues that improving high-yield recycling reduces reliance on environmentally destructive mining and positions countries and companies to benefit economically as demand for recovered materials rises.
13. Jeremy Grantham argues U.S. equities are in a ‘bubble within a bubble’
The excerpt describes @Jeremy Grantham, GMO co-founder and co-author of the memoir The Making of a Permabear with Edward Chancellor, as a long-time market skeptic who argues U.S. equities are in a “bubble within a bubble” and says speculative manias make audiences hostile to warnings. It recounts his record of flagging major market excesses before they broke, including Japan’s late-1980s bubble, the #dotcom bubble, and the 2007 U.S. housing bubble, when even the Federal Reserve was skeptical. Grantham says investor reactions turn angriest when markets are “getting really crazy,” citing backlash to his 2021 “last dance” theme and a January 2022 podcast appearance that went “ballistic and viral,” especially among #Bitcoin and #memeStocks enthusiasts fueled in part by “Biden’s money” stimulus. He frames himself less as a “permabear” than an idealistic “dolphin” among dominant finance personalities, valuing data and distrusting many academic models. The excerpt ends as it introduces his current “bubble within a bubble” thesis, without providing further details beyond that claim.
14. More than half of Americans are ‘getting tired of hearing’ about AI, survey finds
A new survey suggests #AI is widely used in the U.S. but many people are increasingly fatigued by its constant presence and discussion. In a poll of 2,000 U.S. adults, 54% said they are getting tired of hearing about AI, 46% said it feels nearly impossible to escape, and 69% reported using AI to some degree, including 16% daily and 21% a few times per week. Views remain mixed: 40% see AI positively, while 30% view it negatively and 30% feel neutral, and many think it has only partially met the capabilities they were promised, with 48% saying it has partially lived up to expectations and 30% saying it has completely done so. Some respondents described extreme ways to avoid AI, such as cutting off phones and internet, going off-grid, or destroying a phone, while software engineer @Siddhant Khare described a productivity paradox where AI increases workload through reviewing and keeping up with new tools and advised using time saved for rest rather than more work. Overall, the results link rising #AI adoption with growing fatigue and uncertainty about whether the technology is delivering as expected.
15. Iran’s internet blackout passes 1,000 hours as near-total shutdown continues
Iran’s nationwide internet blackout has surpassed 1,000 hours offline, with connectivity holding near 1% of normal levels since it was intensified on February 28, according to NetBlocks. Cloudflare Radar recorded a near-instant 98% collapse in Iranian HTTP traffic around 07:00 UTC on February 28 across multiple metrics, with several provinces going dark simultaneously, while limited web and DNS traffic persists via specific IPv4 routes tied to a whitelisting system. Access is reportedly being routed through the #NationalInformationNetwork, a domestic intranet where only pre-approved sites are reachable, and NetBlocks says the shutdown has exceeded every comparable incident it has cataloged, while noting Libya’s Arab Spring shutdown lasted six months. Workarounds are constrained because #Starlink is reportedly blocked by “military-grade jamming,” and legislation passed this year makes possession or operation of Starlink terminals potentially punishable by execution, with Iran also issuing threats to attack infrastructure owned by hyperscalers including @OpenAI, @Nvidia, @Apple, @Microsoft, and @Google. The blackout follows earlier restrictions imposed on January 8 during widespread protests, indicating the prolonged disruption is part of an escalating censorship and control effort with mounting human and economic impacts.
16. At the HumanX conference, everyone was talking about Claude | TechCrunch
At San Francisco’s HumanX conference, talk about #agenticAI and business automation repeatedly surfaced one standout chatbot: #Claude from @Anthropic, which panelists and vendors cited far more than #ChatGPT. Several attendees said they use Claude heavily and felt @OpenAI’s flagship product has “gone downhill,” reflecting a broader perception that OpenAI lacks focus despite major funding and an upcoming IPO. The article points to recent OpenAI choices and controversies, including abandoning side projects like #Sora, negative buzz from a New Yorker profile about @Sam Altman, work with the Trump administration, and adding ads to ChatGPT, while @Bret Taylor defended Altman’s character during a HumanX discussion. Even so, OpenAI and Anthropic appear close in prominence and revenue, with reporting suggesting Anthropic is catching up among business users and both firms described as exceptionally fast-growing, implying OpenAI’s “falling off” may simply mean it is no longer the uncontested leader. Overall, HumanX’s chatter framed Claude’s rise as a sign of intensifying competition in enterprise AI assistants rather than a collapse of OpenAI.
17. China to include AI in teacher exams and transform education system
China’s Ministry of Education says it will add #AI to teacher qualification exams and certification as part of a broader effort to integrate #AI across the education system. An official action plan states that #AI will be used throughout teaching, from pre-class preparation to in-class instruction and post-class evaluation, aiming to improve efficiency and reduce teachers’ workload. The plan includes smart teaching systems to manage assignments, automate grading, answer student questions, provide tutoring, and analyze classroom teaching behavior to help educators improve instructional quality. It also calls for speeding up #AI education in primary and secondary schools through sufficient, well-designed courses, full incorporation into local curricula with guidelines on objectives, content and class hours by level, plus encouragement of interdisciplinary teaching, after-school programs and research-based learning. Together, these measures link teacher certification and classroom practice to wider curriculum reforms intended to embed #AI in everyday education.
18. UK financial regulators rush to assess risks of Anthropic’s latest AI model – FT reports
UK financial regulators are urgently evaluating the potential risks posed by Anthropic’s newest AI model, following concerns about the rapid pace of advances in artificial intelligence. According to the Financial Times, authorities aim to understand the implications for financial stability and market integrity, particularly for models that could influence economic systems. The swift response reflects a growing emphasis on regulatory oversight in the AI sector and broader efforts to ensure that AI innovations do not disrupt financial markets, underscoring the UK’s commitment to proactive risk assessment.
19. Sam Altman’s home targeted in second attack; two suspects arrested
@Sam Altman’s Russian Hill home in San Francisco was targeted in a second apparent attack early Sunday, two days after an alleged Molotov cocktail incident, and police arrested two suspects. The #SFPD said Amanda Tom, 25, and Muhamad Tarik Hussein, 23, were booked for negligent discharge after a Honda sedan stopped outside the property around 1:40 a.m. and the passenger appeared to fire a round, based on surveillance footage and security personnel who reported hearing a gunshot. Officers later detained the pair without incident and, during a search of a residence, found three firearms, according to police. The Sunday episode follows a Friday morning incident in which Daniel Alejandro Moreno-Gama, 20, allegedly threw a bottle with a flaming rag at the property’s gate, a fire that security guards extinguished, and no injuries were reported in either event. After the first attack, @Altman wrote that fear and anxiety about #AI is justified because society is undergoing a major change, underscoring the broader tensions surrounding #AI as these incidents are investigated.
20. China, Russia and the U.S. Race to Build A.I. Weapons
China, Russia, and the U.S. are in an accelerating race to develop artificial intelligence weapons that could transform global military power. China has incorporated AI into its military strategy, focusing on autonomous drones and cyber warfare to challenge U.S. dominance. Russia is rapidly advancing AI in guided missiles and battlefield robots, aiming to offset conventional military weaknesses. The U.S. is investing heavily in AI to maintain strategic superiority, emphasizing ethical oversight to prevent unintended escalations. This competition reflects a new era where #AI technologies are central to national security and geopolitical influence.
21. Oracle’s CFO got $26M stock while revenue fell, raising questions over executive pay
Oracle’s CFO, Safra Catz, received $26 million in stock-based compensation even as the company’s revenue fell 5% in the recent quarter, sharpening ongoing debates over executive pay tied to company performance. Her stock awards surged while Oracle’s cloud revenue growth slowed, prompting investor concern over how #corporate governance and #executiveincentives align with operational outcomes at major tech firms, and underscoring the challenge of balancing fair pay with shareholder interests in the evolving tech industry.
22. Apple developing display-less smart glasses to rival Meta’s Ray-Bans, Gurman reports
According to @Mark Gurman in Bloomberg’s Power On, @Apple is developing display-less smart glasses aimed at directly rivaling @Meta’s Ray-Bans, targeting an unveiling in late 2026 or early 2027 ahead of a 2027 release. The company plans to differentiate with a higher-end, instantly recognizable “icon” design using acetate, offering at least four frame styles and multiple finishes and colors, plus a distinct camera design with vertically oriented oval lenses and indicator lights. Rather than #augmentedreality displays, the glasses are described as wearable spectacles with cameras, microphones, and sensors for photos and video, calls, notifications, and hands-free #AI interactions such as upgraded #Siri and visual intelligence, positioned as a functional blend of #AppleWatch and #AirPods. This fits a three-pronged #AI wearables strategy that also includes new AirPods and a camera-equipped pendant, intended to use #computervision for contextual awareness in #AppleIntelligence to enable features like turn-by-turn directions and visual reminders. Gurman adds that Apple is still working on more advanced AR glasses with integrated displays, but those remain several years away, while other AI-centric devices, including the wearable pendant and a smart home display, are expected sooner.
23. X says it’s reducing payments to clickbait accounts | TechCrunch
X says it is cutting payouts to accounts that flood the timeline with #clickbait and rapid news aggregation, while claiming it will not limit speech or reach. Head of product @Nikita Bier wrote that all “aggregators” had their payouts reduced to 60% this cycle and will be reduced another 20% next pay cycle, and he added that X will also reduce payments for “habitual bait posters” who label posts as “🚨BREAKING.” Bier argued that “100 stolen reposts and clickbait everyday” crowd out real creators and hurt new author growth, framing the move as preventing manipulation of the monetization program rather than censorship. The comments followed reports from conservative news accounts that they received demonetization emails, including @Dominick McGee (Dom Lucre), who objected to losing monetization again and disputed that he overuses “BREAKING,” although users added a community note citing 91 uses in a week. Other users, such as PoliMath, said their payout dropped and worried they were incorrectly categorized as an aggregator, as the policy change lands amid broader debate about X’s value for driving outbound traffic.
24. Linux kernel adopts formal policy on AI-assisted code contributions
The Linux kernel project has set a formal policy allowing #AI-assisted code contributions, but only with strict transparency and accountability rules. The guidelines bar AI agents from using the legally binding “Signed-off-by” tag under the #Developer Certificate of Origin, and instead mandate a new “Assisted-by” tag, with the human submitter taking responsibility for any bugs or security flaws. The policy follows months of debate, including a January clash between @Dave Hansen and @Lorenzo Stoakes, and @Linus Torvalds dismissing outright bans as pointless, arguing #AI is just another tool and that enforcement should target human behavior rather than policing local tooling. The article contrasts this with projects like Gentoo and NetBSD that banned AI-generated submissions, with NetBSD citing legally tainted outputs due to unclear training-data copyright. Linking these points together, the kernel’s approach aims to preserve the DCO’s legal assurances while permitting tools like #Copilot so long as humans disclose assistance and remain accountable.
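Under such a policy, the trailer block of a submitted patch might look like the following (a hypothetical illustration of the reported tags; the subject line, agent name, and exact formatting are assumptions, not quoted from kernel documentation):

```text
example: fix out-of-bounds read in driver probe path

Assisted-by: Claude Code (AI coding agent)
Signed-off-by: Jane Developer <jane@example.org>
```

The human’s Signed-off-by carries the DCO’s legal weight; the Assisted-by line merely discloses the tooling used.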
25. User anger as Amazon ends support for some older Kindles
@Amazon is ending support for #Kindle and Kindle Fire devices released in 2012 and earlier, prompting anger from users who say working e-readers are being made obsolete. In emails to customers and a company statement, Amazon said that from 20 May 2026 these models will no longer be able to purchase, borrow, or download new content from the Kindle Store, although users can still read previously downloaded books and access their Kindle Library via mobile and desktop apps, and a factory reset will make affected devices unusable. The impacted list includes early Kindles such as the Kindle 1st Gen, Kindle Touch, Kindle Keyboard, and the 1st-gen Kindle Paperwhite, plus multiple 2011-2012 Kindle Fire tablets, and Amazon says it has offered some active users discounts to transition to newer devices. Some customers questioned why a text-focused device needs continued updates, while one BBC interviewee said she felt forced to give up a perfectly working device and worried that ads on newer discounted models could harm the reading experience. Analyst Paolo Pescatore said the change is understandable for security and support reasons because ageing hardware was built for a different era and may not handle newer, more data-hungry services and features, though he noted that losing compatibility can turn a previously seamless device into a more limited one.
26. Unitree humanoid robot claims 10 m/s sprint, approaching world-record pace
Chinese robotics startup Unitree Robotics released a video claiming its H1 humanoid robot sprinted at about 10 meters per second, a pace approaching the average speed of the men’s 100-meter world record set by @Usain Bolt. In the test shown on an athletics track, the H1 passed a speed-measuring device that displayed 10.1 m/s, though the video noted potential measurement error, and it described the robot as having roughly humanlike dimensions, with an 80-centimeter combined thigh and calf length and a weight around 62 kilograms. Unitree CEO Wang Xingxing said at the 2026 Yabuli Entrepreneurs Forum that humanoid robots could break the 10-second 100-meter barrier by mid-2026, according to the Securities Times, while online commenters highlighted the robot’s speed and the maturity of its control stack. The article places the sprint result in the broader context of #humanoid robot racing as a benchmark in China, citing prior competitions where the Tien Kung Ultra robot won a 100-meter race in 21.50 seconds at the 2025 World Humanoid Robot Games and completed a humanoid half-marathon in about 2 hours and 40 minutes, and noting that MirrorMe’s full-size humanoid robot Bolt also claims a 10 m/s peak speed. With the second Humanoid Robot Half Marathon scheduled for April 19 and dozens of teams conducting test runs in Beijing’s Economic-Technological Development Area, the piece suggests competitive events are accelerating progress in #humanoid robotics performance.
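For scale, Bolt’s 9.58-second world record works out to an average of roughly 10.44 m/s over 100 meters, so the H1’s reported 10.1 m/s reading sits just below that average. A quick back-of-envelope check (our arithmetic, not the article’s):

```python
# Compare the H1's reported speed with the average speed implied by
# Usain Bolt's 9.58 s world-record 100 m (Berlin, 2009).
RECORD_TIME_S = 9.58
DISTANCE_M = 100
bolt_avg = DISTANCE_M / RECORD_TIME_S   # ~10.44 m/s average over the race
h1_reading = 10.1                       # m/s, as shown on the track device
print(f"Bolt average: {bolt_avg:.2f} m/s vs H1: {h1_reading} m/s")
```

Note that Bolt’s peak speed during that run reportedly exceeded 12 m/s, so matching his race average is not the same as matching his top speed.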
27. Hacker uses Claude and ChatGPT to breach multiple government agencies
In a landmark cyberattack illustrating the weaponization of #GenerativeAI, a single threat actor successfully breached nine Mexican government agencies by leveraging @Anthropic’s Claude Code and @OpenAI’s GPT-4.1 as core operational tools, not just assistants, effectively turning AI into a force multiplier for cybercrime. Forensic analysis reveals that Claude executed roughly 75% of all remote commands during the intrusion, with over 1,000 prompts generating more than 5,300 actions across live systems, while ChatGPT was used to process stolen data via a custom-built pipeline that analyzed hundreds of internal servers and produced thousands of structured intelligence reports. Critically, the attacker bypassed AI safety guardrails through prompt manipulation and “jailbreak” techniques, enabling the generation of exploits, automation scripts, and lateral-movement strategies, compressing what would traditionally require a coordinated team into the capabilities of a single individual operating at machine speed. However, despite the sophistication of the AI usage, the breach ultimately succeeded due to conventional security failures such as unpatched systems and weak credential management, reinforcing that #AI did not create new vulnerabilities but dramatically amplified the scale, speed, and efficiency of exploiting existing ones, signaling a paradigm shift where cybersecurity risks are increasingly defined by how AI augments attackers rather than replaces them.
28. College graduates face weakest entry-level job market since the pandemic
American college graduates are confronting the weakest entry-level job market since the pandemic, with underemployment at 42.5%, as opportunities tighten alongside the rise of #AI and shifting employer expectations. Gillian Frost, a 22-year-old Smith College student graduating in May, said she has applied to more than 90 jobs since September, been ghosted by nearly a quarter, rejected automatically from about 55%, and despite roughly 10 interviews often receives no follow-up, leaving her feeling helpless amid a tight labor market, AI’s emergence, and US involvement in war. Jeff Kubat, 31, returned to school for a master’s in accounting after years in construction accounts payable, but said even small-town Minnesota employers are very literal about requirements, show little willingness to train, and he is considering lowering salary expectations as openings seem driven more by turnover than growth. A 25-year-old @New York University graduate said many roles labeled entry-level demand three to five years of experience, making them effectively inaccessible to new graduates. Together, their accounts depict a market where reduced hiring and heightened requirements push graduates into longer searches, lowered expectations, and underemployment.
29. Mozilla says Microsoft is using Copilot and Edge to tighten grip on Windows
Microsoft is increasingly integrating Copilot and its Edge browser into Windows to strengthen its control over the operating system, raising concerns over user choice and competition. Mozilla criticizes this strategy, arguing that bundling Copilot and Edge restricts user freedom and undermines alternative software developers. This approach reflects Microsoft’s effort to assert dominance in the #Windows ecosystem by promoting proprietary tools linked tightly to the OS environment. Such practices could limit innovation and diversity in web browsers and AI assistants on Windows devices. The ongoing tension between Microsoft’s business model and open competition highlights the broader challenges in balancing integrated services with consumer options.
30. 8 in 10 Europeans don’t trust US, Chinese firms with data
A POLITICO European Pulse survey finds overwhelming distrust among Europeans toward U.S. and Chinese tech companies handling personal data, as Europe pursues #data localization and stronger homegrown capabilities in #AI, #cloud, and #telecoms. Across six major EU countries, 84 percent of respondents said they do not trust American tech firms with their data and 93 percent said the same about Chinese firms, while trust is higher but still limited for European companies (51 percent) and national governments (45 percent). The survey of 6,698 people in Spain, Germany, France, Italy, Poland and Belgium (March 13 to March 21) showed sharp national differences, with Germany most mistrustful (91 percent distrust U.S. firms, 98 percent distrust Chinese firms) and Poland comparatively more trusting (38 percent trust U.S. firms, 20 percent trust Chinese firms). Although any company processing Europeans’ data must follow EU privacy rules such as #GDPR, foreign-based companies also face domestic security laws that can compel data access, a risk cited by European courts and privacy regulators. The results underscore why EU efforts aim to reduce dependence on foreign tech giants and keep data local.
31. CIA reportedly used Pegasus software for deception op during rescue of airman in Iran
The UK’s Times reports that the CIA used #Pegasus spyware from Israeli-founded @NSO Group to run a deception campaign in Iran during efforts to retrieve a second downed US airman. Using Pegasus’s ability not only to hack devices but also to send spoofed WhatsApp or Signal messages, the agency allegedly transmitted fake messages that appeared to come from hacked phones, telling Iranian leadership and #IRGC operatives the airman had already been found, even as US officials spoke generally about subterfuge without naming Pegasus. The report also revisits claims of a “Ghost Murmur” system that can detect a person’s heartbeat from dozens of miles away, which the Times says was used to locate the airman hiding high on a desert mountain, alongside comments from @Donald Trump and CIA Director @John Ratcliffe about “exquisite technologies.” Separately, The New York Times reported Israel assisted the US with intelligence on whether the airman was alone and conducted strikes to cover American commandos during the rescue amid a war that began February 28 with US and Israeli airstrikes that reportedly killed Iran’s supreme leader @Ali Khamenei. The article links the Pegasus allegation and other claimed technologies to the broader context of the US-Iran conflict, a subsequent two-week ceasefire, and failed US-Iran talks in Pakistan.
32. OpenAI backs Illinois bill to shield AI firms from harm lawsuits
OpenAI is backing an Illinois bill, SB 3444, the #Artificial Intelligence Safety Act, that would limit when AI developers can be sued for #critical harms. The bill offers liability protection only if a company did not intentionally or recklessly cause the harm and has publicly released safety and transparency reports, with critical harms defined as outcomes like 100 or more deaths or serious injuries, $1 billion or more in property damage, or AI-enabled development of chemical, biological, radiological, or nuclear weapons. It also targets #frontier models by tying coverage to training cost, defining qualifying systems as those built with more than $100 million in compute, which would include major developers such as @OpenAI, @Google, @Anthropic, @xAI, and @Meta. OpenAI said the approach focuses on reducing risks from the most advanced systems while allowing broad access, and its testimony argued against a patchwork of state rules in favor of a federal framework, with the Illinois bill ending if Congress enacts overlapping federal regulations. The debate reflects a broader policy vacuum because no federal law clearly assigns responsibility for AI-driven disasters, while states pursue their own reporting and regulatory requirements amid heavy AI industry lobbying.
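To make the bill’s stated thresholds concrete, here is a minimal Python sketch of the qualification logic as described in the summary; the function and constant names are my own invention, not from the bill text, and the actual statutory language would be far more nuanced:

```python
# Illustrative thresholds from the SB 3444 summary above (names are
# hypothetical, not from the bill text).
FRONTIER_COMPUTE_THRESHOLD_USD = 100_000_000
CRITICAL_CASUALTIES = 100
CRITICAL_PROPERTY_DAMAGE_USD = 1_000_000_000

def is_frontier_model(training_compute_cost_usd: float) -> bool:
    """Coverage is tied to training cost: more than $100M in compute."""
    return training_compute_cost_usd > FRONTIER_COMPUTE_THRESHOLD_USD

def is_critical_harm(casualties: int = 0,
                     property_damage_usd: float = 0,
                     cbrn_enabled: bool = False) -> bool:
    """Critical harm per the summary: 100 or more deaths or serious
    injuries, $1B or more in property damage, or AI-enabled development
    of chemical, biological, radiological, or nuclear weapons."""
    return (casualties >= CRITICAL_CASUALTIES
            or property_damage_usd >= CRITICAL_PROPERTY_DAMAGE_USD
            or cbrn_enabled)

def liability_shield_applies(intentional_or_reckless: bool,
                             published_safety_reports: bool) -> bool:
    """Protection only if the harm was not caused intentionally or
    recklessly AND safety/transparency reports were publicly released."""
    return (not intentional_or_reckless) and published_safety_reports
```

Note the asymmetry this encodes: the compute threshold is strict (“more than $100 million”), while the harm thresholds are inclusive (“100 or more”, “$1 billion or more”).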
33. Big Tech puts financial heft behind next-gen nuclear power, AI demand surges
Big Tech companies are investing heavily in next-generation nuclear power to meet surging electricity demand from #AI workloads. Companies such as Microsoft and Google are channeling funds and expertise into smaller, safer reactor designs, betting that clean, reliable baseload power will be a competitive advantage as data center needs grow. The collaboration between tech giants and nuclear firms aims to address climate concerns while securing the energy that advanced computing requires. These developments underscore the deepening convergence of the technology and energy industries, positioning Big Tech as a pivotal player in future infrastructure and environmental solutions.
34. Alex Karp says AI will destroy humanities jobs while vocational skills thrive
@Alex Karp argues that #AI will severely erode the job prospects of humanities graduates, while people with #vocational training will have ample opportunities. Speaking with @Larry Fink at the #World Economic Forum in Davos, he said AI “will destroy humanities jobs” and cited his own difficulty marketing philosophy credentials early in his career, despite attending Haverford and earning a JD from Stanford and a PhD in philosophy. He later said there are “two ways” to have a future: vocational skills or being neurodivergent, and he credited his dyslexia for helping shape Palantir’s success, while also warning the technology will shift economic power away from humanities-trained, largely Democratic voters and women, toward vocationally trained, working-class, often male voters. Other executives dispute the bleak outlook, with @BlackRock and @McKinsey leaders describing active recruiting of liberal arts graduates for creativity, but Karp continues to push alternatives to traditional degrees, including Palantir’s Meritocracy Fellowship and criticism of universities’ admissions and “indoctrination,” emphasizing that once hired, pedigree matters less than performance.
35. How the Internet Broke Everyone’s Bullshit Detectors
Online truth verification is falling behind as #synthetic media, platform-native ambiguity, and algorithmic distribution reward speed and virality over accuracy. The piece describes Lego-style propaganda alleging war crimes and notes an Iran-linked outlet can produce a two-minute synthetic segment in about 24 hours, while even the White House adopted leak-like teaser videos before revealing a mundane app promotion. It argues that classic authenticity cues have inverted, a “zero digital footprint” can now suggest something was never captured at all, and automated traffic is estimated at 51 percent of internet activity, amplifying low-quality viral content faster than investigators can check it. #OSINT researchers are “perpetually catching up,” and the flood of war-monitoring accounts and “super sharers” can create false certainty, confirmation bias, or cosmetic validation of official narratives, as described by @Maryam Ishani and @Manisha Ganguly. Verification is further weakened when access to primary evidence shrinks, such as #PlanetLabs withholding satellite imagery of Iran and the Middle East conflict zone after a US government request, with @Pete Hegseth saying open source is not the place to determine what happened, leaving a narrower gap in which #GenerativeAI competes to define what is seen.
36. Why Apple pulled high-end Mac mini and Mac Studio models from sale
Apple has stopped taking orders for selected high-end #Mac mini and #Mac Studio configurations, marking them “currently unavailable” after delivery times stretched dramatically. Recent shipping estimates for models with 64GB RAM or more reportedly reached 10 to 12 weeks, and in some cases 4 to 6 months, following earlier extended waits for the #M4 Mac mini and Mac Studio and the end of orders for a 512GB #M3 Ultra Mac Studio. The article suggests this likely means Apple does not plan to accept orders for these configurations in the foreseeable future, possibly because it is either preparing an #M5 refresh or facing limited #DRAM inventory. It also notes demand has risen because these desktops are popular for running #LLMs, especially higher memory variants at 48GB or more, which could have contributed to the shortages. As a result, while base models remain available with multi-week delivery times, the high-end options have been pulled, potentially to manage supply or clear the way for new models.
37. MacBook Neo punches above its weight in Windows 11 gaming test
YouTuber @ETA Prime shows that the value-focused MacBook Neo, powered by a mobile Apple A18 Pro SoC and limited to 8 GB of RAM, can run #Windows11 ARM games surprisingly well through #Parallels Desktop with full 3D hardware acceleration. After allocating 5 GB of RAM to the VM, tested titles without native macOS versions included Marvel Cosmic Invasion at about 60 FPS maxed, Dirt 3 at around 75 FPS at 1200p High, Portal 2 at over 100 FPS on Medium, and Skyrim at about 60 FPS at 1200p Medium. The results suggest that, despite concerns a phone-class SoC and virtualization would bottleneck performance, low-to-mid-range Windows games remain playable under this setup. GTA V did not stay in a playable FPS range in Parallels, though the article notes it can run smoothly via #Crossover using #Wine and a #Proton layer. Overall, the test argues the MacBook Neo punches above its weight for Windows gaming when using Parallels, especially given the tight memory constraints.
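For readers who want to reproduce the memory-allocation step, Parallels exposes VM settings through its `prlctl` command-line tool; a minimal sketch, assuming a VM named “Windows 11” (the VM name is hypothetical, and the video likely used the GUI instead):

```shell
# Cap the VM at ~5 GB of RAM (--memsize takes a value in megabytes),
# matching the allocation used in the test.
prlctl set "Windows 11" --memsize 5120

# Restart the VM so the new allocation takes effect.
prlctl start "Windows 11"
```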
38. Trump officials may be encouraging banks to test Anthropic’s Mythos model | TechCrunch
Treasury Secretary @Scott Bessent and Federal Reserve Chair @Jerome Powell reportedly urged bank executives to test Anthropic’s #Mythos model to detect vulnerabilities, according to Bloomberg. Although JPMorgan Chase was the only bank named as an initial partner with access, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are said to be testing Mythos as well. Anthropic announced Mythos with limited access, saying that while it was not trained specifically for #cybersecurity, it is highly capable at finding security vulnerabilities, a claim some observers suggested could be hype or an enterprise sales tactic. The push is notable because Anthropic is suing the Trump administration over the Department of Defense labeling the company a #supply-chain risk after talks broke down over Anthropic’s limits on government use of its AI models, and the Financial Times reports U.K. financial regulators are also discussing risks posed by Mythos.
That’s all for today’s digest for 2026/04/13! We picked and processed 37 articles. Stay tuned for tomorrow’s collection of insights and discoveries.
Thanks, Patricia Zougheib and Dr Badawi, for curating the links
See you in the next one! 🚀
