OrangeBot.AI Digest — 2026-03-03
85 headlines across 8 sources, aggregated for this day.
Hacker News(15)
- Iran War Cost Tracker (iran-cost-ticker.com)
- GitHub Is Having Issues (www.githubstatus.com)
- Intel's make-or-break 18A process node debuts for data center with 288-core Xeon (www.tomshardware.com)
- GPT‑5.3 Instant (openai.com)
- Physics Girl: Super-Kamiokande – Imaging the sun by detecting neutrinos [video] (www.youtube.com)
- MacBook Air with M5 (www.apple.com)
- Don't become an engineering manager (newsletter.manager.dev)
- I'm reluctant to verify my identity or age for any online services (neilzone.co.uk)
- Apple Studio Display and Studio Display XDR (www.apple.com)
- Claude's Cycles [pdf] (www-cs-faculty.stanford.edu)
- MacBook Pro with new M5 Pro and M5 Max (www.apple.com)
- Most-read tech publications have lost over half their Google traffic since 2024 (growtika.com)
- I'm losing the SEO battle for my own open source project (twitter.com)
- India's top court angry after junior judge cites fake AI-generated orders (www.bbc.com)
- Mullvad VPN: Banned TV Ad in the Streets of London [video] (www.youtube.com)
GitHub Trending(10)
Product Hunt(15)
- getviktor.com
Your AI Coworker that proactively executes tasks
- Krisp Accent Conversion
Understand accented speech in real time
- The Bias
The synthesis engine for multi-perspective news
- Deep Personality
Science-backed personality insights for you and your partner
- Continue (Mission Control)
Quality control for your software factory
- Lavalier AI
Interview Intelligence to make confident hiring decisions
- MonoDesk
For designers, web/video creators who'd rather be creating
- Springfield Oracle
Every Simpsons prediction sourced, scored, fact-checked
- Tomosu
Lock your apps by default to improve your digital wellbeing
- Better Clipboard
Smarter copy-paste for macOS. New version!
- Shavely
Group chat where every message speaks your language
- Sequirly
Prevent accidental data leaks while using AI tools
- Secret Sauce 3D
AI tool suite for professional 3D artists
- Skyvern MCP & Skills
Let Claude code and Open Claw automate the web
- GHOSTYPE
The AI voice interface that learns your style.
Hugging Face(15)
- UniG2U-Bench: Do Unified Models Advance Multimodal Understanding?
Unified multimodal models have recently demonstrated strong generative capabilities, yet whether and when generation improves understanding remains unclear. Existing benchmarks lack a systematic exploration of the specific tasks where generation facilitates understanding. To this end, we introduce UniG2U-Bench, a comprehensive benchmark categorizing generation-to-understanding (G2U) evaluation into 7 regimes and 30 subtasks, requiring varying degrees of implicit or explicit visual transformations. Extensive evaluation of over 30 models reveals three core findings: 1) Unified models generally underperform their base Vision-Language Models (VLMs), and Generate-then-Answer (GtA) inference typically degrades performance relative to direct inference. 2) Consistent enhancements emerge in spatial intelligence, visual illusions, or multi-round reasoning subtasks, where enhanced spatial and shape perception, as well as multi-step intermediate image states, prove beneficial. 3) Tasks with similar reasoning structures and models sharing architectures exhibit correlated behaviors, suggesting that generation-understanding coupling induces class-consistent inductive biases over tasks, pretraining data, and model architectures. These findings highlight the necessity for more diverse training data and novel paradigms to fully unlock the potential of unified multimodal modeling.
- Beyond Language Modeling: An Exploration of Multimodal Pretraining
The visual world offers a critical axis for advancing foundation models beyond language. Despite growing interest in this direction, the design space for native multimodal models remains opaque. We provide empirical clarity through controlled, from-scratch pretraining experiments, isolating the factors that govern multimodal pretraining without interference from language pretraining. We adopt the Transfusion framework, using next-token prediction for language and diffusion for vision, to train on diverse data including text, video, image-text pairs, and even action-conditioned video. Our experiments yield four key insights: (i) Representation Autoencoder (RAE) provides an optimal unified visual representation by excelling at both visual understanding and generation; (ii) visual and language data are complementary and yield synergy for downstream capabilities; (iii) unified multimodal pretraining leads naturally to world modeling, with capabilities emerging from general training; and (iv) Mixture-of-Experts (MoE) enables efficient and effective multimodal scaling while naturally inducing modality specialization. Through IsoFLOP analysis, we compute scaling laws for both modalities and uncover a scaling asymmetry: vision is significantly more data-hungry than language. We demonstrate that the MoE architecture harmonizes this scaling asymmetry by providing the high model capacity required by language while accommodating the data-intensive nature of vision, paving the way for truly unified multimodal models.
- Utonia: Toward One Encoder for All Point Clouds
We dream of a future where point clouds from all domains can come together to shape a single model that benefits them all. Toward this goal, we present Utonia, a first step toward training a single self-supervised point transformer encoder across diverse domains, spanning remote sensing, outdoor LiDAR, indoor RGB-D sequences, object-centric CAD models, and point clouds lifted from RGB-only videos. Despite their distinct sensing geometries, densities, and priors, Utonia learns a consistent representation space that transfers across domains. This unification improves perception capability while revealing intriguing emergent behaviors that arise only when domains are trained jointly. Beyond perception, we observe that Utonia representations can also benefit embodied and multimodal reasoning: conditioning vision-language-action policies on Utonia features improves robotic manipulation, and integrating them into vision-language models yields gains on spatial reasoning. We hope Utonia can serve as a step toward foundation models for sparse 3D data, and support downstream applications in AR/VR, robotics, and autonomous driving.
- BeyondSWE: Can Current Code Agent Survive Beyond Single-Repo Bug Fixing?
Current benchmarks for code agents primarily assess narrow, repository-specific fixes, overlooking critical real-world challenges such as cross-repository reasoning, domain-specialized problem solving, dependency-driven migration, and full-repository generation. To address this gap, we introduce BeyondSWE, a comprehensive benchmark that broadens existing evaluations along two axes - resolution scope and knowledge scope - using 500 real-world instances across four distinct settings. Experimental results reveal a significant capability gap: even frontier models plateau below 45% success, and no single model performs consistently across task types. To systematically investigate the role of external knowledge, we develop SearchSWE, a framework that integrates deep search with coding abilities. Our experiments show that search augmentation yields inconsistent gains and can in some cases degrade performance, highlighting the difficulty of emulating developer-like workflows that interleave search and reasoning during coding tasks. This work offers both a realistic, challenging evaluation benchmark and a flexible framework to advance research toward more capable code agents.
- Beyond Length Scaling: Synergizing Breadth and Depth for Generative Reward Models
Recent advancements in Generative Reward Models (GRMs) have demonstrated that scaling the length of Chain-of-Thought (CoT) reasoning considerably enhances the reliability of evaluation. However, current works predominantly rely on unstructured length scaling, ignoring the divergent efficacy of different reasoning mechanisms: Breadth-CoT (B-CoT, i.e., multi-dimensional principle coverage) and Depth-CoT (D-CoT, i.e., substantive judgment soundness). To address this, we introduce Mix-GRM, a framework that reconfigures raw rationales into structured B-CoT and D-CoT through a modular synthesis pipeline, subsequently employing Supervised Fine-Tuning (SFT) and Reinforcement Learning with Verifiable Rewards (RLVR) to internalize and optimize these mechanisms. Comprehensive experiments demonstrate that Mix-GRM establishes a new state-of-the-art across five benchmarks, surpassing leading open-source RMs by an average of 8.2%. Our results reveal a clear divergence in reasoning: B-CoT benefits subjective preference tasks, whereas D-CoT excels in objective correctness tasks. Consequently, misaligning the reasoning mechanism with the task directly degrades performance. Furthermore, we demonstrate that RLVR acts as a switching amplifier, inducing an emergent polarization where the model spontaneously allocates its reasoning style to match task demands. The synthesized data and models are released at https://huggingface.co/collections/DonJoey/mix-grm, and the code is released at https://github.com/Don-Joey/Mix-GRM.
- How Controllable Are Large Language Models? A Unified Evaluation across Behavioral Granularities
Large Language Models (LLMs) are increasingly deployed in socially sensitive domains, yet their unpredictable behaviors, ranging from misaligned intent to inconsistent personality, pose significant risks. We introduce SteerEval, a hierarchical benchmark for evaluating LLM controllability across three domains: language features, sentiment, and personality. Each domain is structured into three specification levels: L1 (what to express), L2 (how to express), and L3 (how to instantiate), connecting high-level behavioral intent to concrete textual output. Using SteerEval, we systematically evaluate contemporary steering methods, revealing that control often degrades at finer-grained levels. Our benchmark offers a principled and interpretable framework for safe and controllable LLM behavior, serving as a foundation for future research.
- Kling-MotionControl Technical Report
Character animation aims to generate lifelike videos by transferring motion dynamics from a driving video to a reference image. Recent strides in generative models have paved the way for high-fidelity character animation. In this work, we present Kling-MotionControl, a unified DiT-based framework engineered specifically for robust, precise, and expressive holistic character animation. Leveraging a divide-and-conquer strategy within a cohesive system, the model orchestrates heterogeneous motion representations tailored to the distinct characteristics of body, face, and hands, effectively reconciling large-scale structural stability with fine-grained articulatory expressiveness. To ensure robust cross-identity generalization, we incorporate adaptive identity-agnostic learning, facilitating natural motion retargeting for diverse characters ranging from realistic humans to stylized cartoons. Simultaneously, we guarantee faithful appearance preservation through meticulous identity injection and fusion designs, further supported by a subject library mechanism that leverages comprehensive reference contexts. To ensure practical utility, we implement an advanced acceleration framework utilizing multi-stage distillation, boosting inference speed by over 10x. Kling-MotionControl distinguishes itself through intelligent semantic motion understanding and precise text responsiveness, allowing for flexible control beyond visual inputs. Human preference evaluations demonstrate that Kling-MotionControl delivers superior performance compared to leading commercial and open-source solutions, achieving exceptional fidelity in holistic motion control, open domain generalization, and visual quality and coherence. These results establish Kling-MotionControl as a robust solution for high-quality, controllable, and lifelike character animation.
- PRISM: Pushing the Frontier of Deep Think via Process Reward Model-Guided Inference
DEEPTHINK methods improve reasoning by generating, refining, and aggregating populations of candidate solutions, which enables strong performance on complex mathematical and scientific tasks. However, existing frameworks often lack reliable correctness signals during inference, which creates a population-enhancement bottleneck where deeper deliberation amplifies errors, suppresses correct minority solutions, and yields weak returns to additional compute. In this paper, we introduce a functional decomposition of DEEPTHINK systems and propose PRISM, a Process Reward Model (PRM)-guided inference algorithm that uses step-level verification to guide both population refinement and solution aggregation. During refinement, PRISM treats candidate solutions as particles in a PRM-defined energy landscape and reshapes the population through score-guided resampling and stochastic refinement, which concentrates probability mass on higher-quality reasoning while preserving diversity. Across mathematics and science benchmarks, PRISM is competitive with or outperforms existing DEEPTHINK methods, reaching 90.0%, 75.4%, and 71.4% with gpt-oss-20b on AIME25, HMMT25, and GPQA Diamond, respectively, while matching or exceeding gpt-oss-120b. Additionally, our analysis shows that PRISM produces consistent net-directional correction during refinement, remains reliable when the initial population contains few correct candidates, and often lies on the compute-accuracy Pareto frontier.
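The score-guided resampling step the PRISM abstract describes, treating candidate solutions as particles weighted by a process reward model, can be illustrated with a minimal sketch. This is not the authors' implementation; `prm_score` and the softmax temperature are placeholders for a real PRM and its calibration:

```python
import math
import random

def prm_guided_resample(candidates, prm_score, temperature=1.0, rng=random):
    """One round of score-guided resampling: candidate solutions are
    particles, a process-reward score defines their weights, and
    multinomial resampling concentrates probability mass on
    higher-scoring candidates while preserving some diversity.
    (Illustrative sketch only; `prm_score` stands in for a real PRM.)"""
    scores = [prm_score(c) for c in candidates]
    # Softmax over PRM scores -> resampling weights (max-shifted for stability).
    m = max(scores)
    weights = [math.exp((s - m) / temperature) for s in scores]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Resample a population of the same size from the weighted candidates.
    return rng.choices(candidates, weights=probs, k=len(candidates))
```

In a full loop this would alternate with a stochastic refinement step (e.g., asking the model to revise each surviving candidate) before the next PRM-weighted resampling round.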
- Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance
Instruction-based video editing has witnessed rapid progress, yet current methods often struggle with precise visual control, as natural language is inherently limited in describing complex visual nuances. Although reference-guided editing offers a robust solution, its potential is currently bottlenecked by the scarcity of high-quality paired training data. To bridge this gap, we introduce a scalable data generation pipeline that transforms existing video editing pairs into high-fidelity training quadruplets, leveraging image generative models to create synthesized reference scaffolds. Using this pipeline, we construct RefVIE, a large-scale dataset tailored for instruction-reference-following tasks, and establish RefVIE-Bench for comprehensive evaluation. Furthermore, we propose a unified editing architecture, Kiwi-Edit, that synergizes learnable queries and latent visual features for reference semantic guidance. Our model achieves significant gains in instruction following and reference fidelity via a progressive multi-stage training curriculum. Extensive experiments demonstrate that our data and architecture establish a new state-of-the-art in controllable video editing. All datasets, models, and code are released at https://github.com/showlab/Kiwi-Edit.
- Qwen3-Coder-Next Technical Report
We present Qwen3-Coder-Next, an open-weight language model specialized for coding agents. Qwen3-Coder-Next is an 80-billion-parameter model that activates only 3 billion parameters during inference, enabling strong coding capability with efficient inference. In this work, we explore how far strong training recipes can push the capability limits of models with small parameter footprints. To achieve this, we perform agentic training through large-scale synthesis of verifiable coding tasks paired with executable environments, allowing learning directly from environment feedback via mid-training and reinforcement learning. Across agent-centric benchmarks including SWE-Bench and Terminal-Bench, Qwen3-Coder-Next achieves competitive performance relative to its active parameter count. We release both base and instruction-tuned open-weight versions to support research and real-world coding agent development.
- Track4World: Feedforward World-centric Dense 3D Tracking of All Pixels
Estimating the 3D trajectory of every pixel from a monocular video is crucial and promising for a comprehensive understanding of the 3D dynamics of videos. Recent monocular 3D tracking works demonstrate impressive performance, but are limited to either tracking sparse points on the first frame or a slow optimization-based framework for dense tracking. In this paper, we propose a feedforward model, called Track4World, enabling an efficient holistic 3D tracking of every pixel in the world-centric coordinate system. Built on the global 3D scene representation encoded by a VGGT-style ViT, Track4World applies a novel 3D correlation scheme to simultaneously estimate the pixel-wise 2D and 3D dense flow between arbitrary frame pairs. The estimated scene flow, along with the reconstructed 3D geometry, enables subsequent efficient 3D tracking of every pixel of this video. Extensive experiments on multiple benchmarks demonstrate that our approach consistently outperforms existing methods in 2D/3D flow estimation and 3D tracking, highlighting its robustness and scalability for real-world 4D reconstruction tasks.
- NOVA: Sparse Control, Dense Synthesis for Pair-Free Video Editing
Recent video editing models have achieved impressive results, but most still require large-scale paired datasets. Collecting such naturally aligned pairs at scale remains highly challenging and constitutes a critical bottleneck, especially for local video editing data. Existing workarounds transfer image editing to video through global motion control for pair-free video editing, but such designs struggle with background and temporal consistency. In this paper, we propose NOVA: Sparse Control \& Dense Synthesis, a new framework for unpaired video editing. Specifically, the sparse branch provides semantic guidance through user-edited keyframes distributed across the video, and the dense branch continuously incorporates motion and texture information from the original video to maintain high fidelity and coherence. Moreover, we introduce a degradation-simulation training strategy that enables the model to learn motion reconstruction and temporal consistency by training on artificially degraded videos, thus eliminating the need for paired data. Our extensive experiments demonstrate that NOVA outperforms existing approaches in edit fidelity, motion preservation, and temporal coherence.
- APRES: An Agentic Paper Revision and Evaluation System
Scientific discoveries must be communicated clearly to realize their full potential. Without effective communication, even the most groundbreaking findings risk being overlooked or misunderstood. The primary way scientists communicate their work and receive feedback from the community is through peer review. However, the current system often provides inconsistent feedback between reviewers, ultimately hindering the improvement of a manuscript and limiting its potential impact. In this paper, we introduce APRES, a novel method powered by Large Language Models (LLMs) that updates a scientific paper's text based on an evaluation rubric. Our automated method discovers a rubric that is highly predictive of future citation counts and integrates it with APRES in an automated system that revises papers to enhance their quality and impact. Crucially, this objective should be met without altering the core scientific content. We demonstrate the success of APRES, which improves future citation prediction by 19.6% in mean averaged error over the next best baseline, and show that our paper revision process yields papers that are preferred over the originals by human expert evaluators 79% of the time. Our findings provide strong empirical support for using LLMs as a tool to help authors stress-test their manuscripts before submission. Ultimately, our work seeks to augment, not replace, the essential role of human expert reviewers, for it should be humans who discern which discoveries truly matter, guiding science toward advancing knowledge and enriching lives.
- SGDC: Structurally-Guided Dynamic Convolution for Medical Image Segmentation
Spatially variant dynamic convolution provides a principled approach to integrating spatial adaptivity into deep neural networks. However, mainstream designs in medical segmentation commonly generate dynamic kernels through average pooling, which implicitly collapses high-frequency spatial details into a coarse, spatially-compressed representation, leading to over-smoothed predictions that degrade the fidelity of fine-grained clinical structures. To address this limitation, we propose a novel Structure-Guided Dynamic Convolution (SGDC) mechanism, which leverages an explicitly supervised structure-extraction branch to guide the generation of dynamic kernels and gating signals for structure-aware feature modulation. Specifically, the high-fidelity boundary information from this auxiliary branch is fused with semantic features to enable spatially-precise feature modulation. By replacing context aggregation with pixel-wise structural guidance, the proposed design effectively prevents the information loss introduced by average pooling. Experimental results show that SGDC achieves state-of-the-art performance on the ISIC 2016, PH2, ISIC 2018, and CoNIC datasets, delivering superior boundary fidelity by reducing the Hausdorff Distance (HD95) by 2.05, and providing consistent IoU gains of 0.99%-1.49% over pooling-based baselines. Moreover, the mechanism exhibits strong potential for extension to other fine-grained, structure-sensitive vision tasks, such as small-object detection, offering a principled solution for preserving structural integrity in medical image analysis. To facilitate reproducibility and encourage further research, the implementation code for both our SGE and SGDC modules is publicly released at https://github.com/solstice0621/SGDC.
- HateMirage: An Explainable Multi-Dimensional Dataset for Decoding Faux Hate and Subtle Online Abuse
Subtle and indirect hate speech remains an underexplored challenge in online safety research, particularly when harmful intent is embedded within misleading or manipulative narratives. Existing hate speech datasets primarily capture overt toxicity, underrepresenting the nuanced ways misinformation can incite or normalize hate. To address this gap, we present HateMirage, a novel dataset of Faux Hate comments designed to advance reasoning and explainability research on hate emerging from fake or distorted narratives. The dataset was constructed by identifying widely debunked misinformation claims from fact-checking sources and tracing related YouTube discussions, resulting in 4,530 user comments. Each comment is annotated along three interpretable dimensions: Target (who is affected), Intent (the underlying motivation or goal behind the comment), and Implication (its potential social impact). Unlike prior explainability datasets such as HateXplain and HARE, which offer token-level or single-dimensional reasoning, HateMirage introduces a multi-dimensional explanation framework that captures the interplay between misinformation, harm, and social consequence. We benchmark multiple open-source language models on HateMirage using ROUGE-L F1 and Sentence-BERT similarity to assess explanation coherence. Results suggest that explanation quality may depend more on pretraining diversity and reasoning-oriented data than on model scale alone. By coupling misinformation reasoning with harm attribution, HateMirage establishes a new benchmark for interpretable hate detection and responsible AI research.
Techmeme(15)
- Sources: President Trump met with Coinbase CEO Brian Armstrong on March 3 before publicly admonishing banks over the GENIUS Act, echoing Coinbase's position (Jasper Goodman/Politico)
President Donald Trump met privately on Tuesday with Coinbase CEO Brian Armstrong before publicly backing the company's position …
- An OpenAI spokesperson says Sam Altman misspoke in saying OpenAI was looking to deploy on NATO classified networks, and that it was for "unclassified networks" (Hyunsu Yim/Reuters)
OpenAI is considering a contract to deploy its AI technology on North Atlantic Treaty Organization's (NATO) …
- Asia's smaller chip companies are joining their bigger peers in hiking prices as robust AI demand fuels capex, projected to rise 25% YoY to over $136B in 2026 (Nikkei Asia)
- The UK government commits an initial £40M to an AI research lab, modeled on its DARPA-inspired ARIA, seeking breakthroughs in science, healthcare, and transport (Madhumita Murgia/Financial Times)
New state-backed body seeks AI breakthroughs in science, healthcare and transport — The UK is launching …
- Sea's NY-listed shares fell as much as 27% on March 3, their worst intraday drop since 2023, after reporting Q4 net income up 73% YoY to $410.9M, vs. $442M est. (Olivia Poh/Bloomberg)
Sea Ltd. shares fell by the most in more than two years after its quarterly earnings missed analysts' estimates …
- Lockheed Martin to follow US federal ban on Anthropic, as government contracting attorneys say defense contractors are expected to comply with the DOD's order (Reuters)
U.S. defense contractors, like Lockheed Martin (LMT.N), are expected to follow the Pentagon's order to purge Anthropic's prized AI tools …
- Security researchers successfully prompted the AI behind a Utah prescription renewal pilot to reclassify meth as an "unrestricted therapeutic", and more (Sam Sabin/Axios)
Security researchers used relatively simple jailbreaking techniques to trick the AI system powering Utah's new prescription refill bot.
- Dario Amodei says Anthropic has a higher retention than OpenAI, emphasizing its "mission" as a way to reinforce staff loyalty and fend off competitors' offers (The Information)
Anthropic CEO Dario Amodei on Tuesday touted the company's ability to prevent competitors from hiring away its staff …
- TikTok won't add E2EE to DMs because it would prevent police and safety teams from reading messages if needed, saying it wants to protect young users from harm (Joe Tidy/BBC)
TikTok will not introduce end-to-end encryption (E2EE) - the controversial privacy feature used by nearly all its rivals - arguing it makes users less safe.
- OpenAI's red lines within its DOD agreement are built upon legal language that the NSA has redefined over decades to permit the things they appear to prohibit (Mike Masnick/Techdirt)
Within hours on Friday, the Pentagon blacklisted one AI company for refusing to drop its safety commitments on surveillance …
- KeyCare, a virtual care platform built on the Epic EHR, raised $27.4M led by HealthX Ventures, bringing its total funding to over $55M (Jessica Hagen/MobiHealthNews)
- Google details Coruna, an exploit kit used to hijack iPhones via malicious websites; iVerify suggests it may have been originally built for the US government (Andy Greenberg/Wired)
A highly sophisticated set of iPhone hijacking techniques has likely infected tens of thousands of phones or more.
- Self-driving software startup Oxa raised a $103M Series D, with $50M coming from the UK government's National Wealth Fund; Nvidia's NVentures also invested (Tom Nugent/Sifted)
Oxa develops self-driving software for industrial use cases, with customers including BP, DHL and Vantec
- Sources: the White House is debating whether to allow Tencent to maintain stakes in US and Finnish video game companies; Tencent holds a 28% stake in Epic Games (Financial Times)
Chinese company's investments in ‘Fortnite’ creator Epic Games and other creators have faced long-running US security review
- As experts warn about cyberattacks from Iran on the US, CISA is operating under a partial government shutdown and dealing with leadership changes (Samantha Subin/CNBC)
As the fighting in the Middle East roars on, cyber experts are increasingly warning of online attacks from Iran on U.S. businesses and infrastructure.
Solidot(15)
- ChatGPT uninstalls surged 295% after the Pentagon deal
According to Sensor Tower data, users reacted to OpenAI's deal with the Pentagon: on February 28, US uninstalls of OpenAI's ChatGPT app surged 295% over the previous day, against an average daily uninstall rate of 9% over the prior 30 days. Meanwhile, downloads of Claude, the app from OpenAI rival Anthropic, which declined the Pentagon's demands, grew 37% on February 27 and 51% on February 28. ChatGPT downloads were hit as well: on February 27, before the partnership was announced, downloads rose 14% day over day, but they fell 13% on the 28th after the announcement and another 5% on March 1. Claude has also topped the US free-app chart for three consecutive days, a surge that has caused several brief Claude outages.
- Arm Cortex-X925 desktop performance catches up with AMD and Intel
Chips designed by the UK company Arm have long been optimized for low power and small die area, but Arm has also been shipping cores aimed at high-performance applications. When Arm released its 64-bit Cortex-A57 core in 2012, matching the latest AMD and Intel processors was a distant dream; its 2024 high-performance core, the Cortex-X925, has made that dream a reality. The Arm cores in Nvidia's GB10 Superchip are based on the Cortex-X925, and on desktop performance it catches up with AMD's Zen 5 and Intel's Lion Cove. The GB10 uses ten X925 cores split into two clusters, one clocked at up to 4 GHz and the other at 3.9 GHz. Tests show its out-of-order (reordering) performance beats AMD Zen 5, and its L2 cache capacity matches the P-cores (performance cores) of Intel's processors.
- Antarctica has lost 12,000 km² of grounded ice over the past three decades
Glaciologists in Irvine, California mapped circum-Antarctic grounding-line migration over the past 30 years, showing a loss of more than 12,000 square kilometers of grounded ice. Grounded ice is ice in direct contact with the seabed or bedrock, as distinct from floating ice shelves or icebergs, and is generally more stable. Combining data from multiple satellites, the researchers found that 77% of the coastline showed no grounding-line migration, but West Antarctica, the Antarctic Peninsula, and parts of East Antarctica lost 12,820 square kilometers of grounded ice. The ice sheet is retreating from its grounding line at an average of 442 square kilometers per year. The most dramatic changes are in West Antarctica's Amundsen Sea and Getz regions, where glaciers have retreated roughly 10-40 km; Pine Island Glacier has retreated 33 km, Thwaites Glacier 26 km, and Smith Glacier as much as 42 km.
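As a quick consistency check, the reported average retreat rate and the cumulative loss line up over roughly a three-decade record (the exact 29-year span is my assumption; the item only says "past 30 years"):

```python
rate_km2_per_year = 442   # average grounding-line retreat rate
years = 29                # assumed span of the ~30-year satellite record
total_km2 = rate_km2_per_year * years
print(total_km2)          # 12818, close to the reported 12,820 km²
```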
- Xiaomi's Leica phone starts at about 16,000 yuan
Xiaomi and Leica announced the Leitzphone, a Leica smartphone aimed at the premium market and named after Leica founder Ernst Leitz. The Leitzphone features two 50-megapixel lenses and one 200-megapixel lens, with camera-like dials for adjusting focal length, shutter speed, and exposure. Its hardware is essentially the same as the Xiaomi 17 Ultra, and it starts at €1,999, roughly 16,000 yuan.
- CO2 levels in human blood are rising too
As atmospheric carbon dioxide (CO2) concentrations keep climbing, CO2 levels in human blood are quietly rising as well. If the trend continues unchecked, a key blood marker could approach the edge of the healthy range within decades. The paper was published in Air Quality, Atmosphere & Health. To probe how atmospheric change affects the body's interior, the team systematically reviewed more than 20 years of large-scale US population data and found that subtle shifts in blood chemistry track the rise in atmospheric CO2 strikingly closely. Mean serum bicarbonate, a marker closely tied to the body's CO2 levels, rose about 7% over the two decades, while average calcium and phosphorus levels declined. The shift coincides with atmospheric CO2 rising from 369 ppm in 2000 to over 420 ppm today. The team says this suggests the body may be "silently compensating" for the internal stress brought on by environmental change. Bicarbonate is the buffer that maintains the blood's acid-base balance: when blood CO2 rises, the body automatically retains more bicarbonate to neutralize acidity and stabilize the internal environment. But this compensation is not sustainable; continued fine-tuning will eventually disturb the body's delicate physiological balance. Model projections show that if the current trend holds, mean serum bicarbonate could reach the upper end of today's healthy range within 50 years, while calcium and phosphorus concentrations could fall below the lower limits later this century.
- Ars Technica's AI reporter departs
Last month the prominent tech outlet Ars Technica was found to have used AI-generated content as a source in an AI news story, prompting a public apology from Ars co-founder and editor-in-chief Ken Fisher. Benj Edwards, the story's co-author and Ars's senior AI reporter, said he took full responsibility and that the other co-author had nothing to do with the error. He explained that he had tried using an experimental AI tool built on Claude Code to extract structured quotes from source material for an outline, but the AI refused to process it, which he guessed was because the article described a harassment incident (an AI harassing a human). He then pasted the text into ChatGPT, failed to notice that ChatGPT had produced a paraphrase of the article's author rather than verbatim quotes, and did not verify the quotes against the original. Edwards's Ars bio is now written in the past tense, indicating he has left; Ars has not said whether he was fired, and Edwards himself declined to comment.
- Canada's British Columbia permanently adopts daylight saving time
British Columbia Premier David Eby announced that the province will permanently adopt daylight saving time, and that this Sunday, March 8, will be the last time residents change their clocks. Eby said residents have waited long enough and will never need to touch their clocks again. Research has found that changing the time twice a year is bad for public health and safety. BC residents will have eight months to prepare for November 1, 2026, when clocks would normally fall back an hour but will instead stay put. The province's new time zone will be called "Pacific Time."
- Japan plans to ban airplane passengers from using power banks
After multiple power-bank-related aircraft fires last year, Japan plans to ban passengers from using power banks on flights starting in mid-April. The Ministry of Land, Infrastructure, Transport and Tourism has published draft amendments to its civil aviation regulations for public comment. In Japan, power banks are classified as spare batteries and banned from checked baggage; in carry-on luggage they must not exceed 160 Wh, those over 100 Wh are limited to two, and those under 100 Wh are unrestricted. The revised rules limit passengers to two batteries total, power banks included, ban charging on board, and advise passengers not to use power banks at all. Japanese airlines may go a step further and ban their use outright.
- The evolutionary advantages of left-handedness
Most humans are right-handed, but 10.6% are left-handed. Evolutionary psychology holds that right- and left-handedness each carry their own evolutionary advantages. Right-handers have an edge in certain cooperative behaviors, such as learning by imitation: since most teachers are right-handed, right-handed learners may pick up new skills more easily. Left-handers are thought to have an advantage in competitive settings such as fighting: because left-handers are far rarer than right-handers, their attacks come as more of a surprise. A study published in Scientific Reports tested the link between left-handedness and competitiveness. An analysis of 50 left-handed and 483 right-handed volunteers found that the stronger the right-hand preference, the stronger the anxiety-driven tendency to avoid competition; the stronger the left-hand preference, the stronger the self-development-oriented competitive tendency.
- NIST restricts foreign scientists' access to its labs
Over the past few weeks, hundreds of foreign scientists working at the US National Institute of Standards and Technology (NIST) have had their lab access restricted: they may not enter labs at night or on weekends unless accompanied by a federal employee. Scientists from some countries will lose access as early as the end of this month. The proposed rules have no written version and have been communicated only in meetings. The latest changes build on research-security rules NIST updated in 2025, which classify scientists from China, Russia, Iran, North Korea, Cuba, Venezuela, and Syria as "high risk." Researchers from China and other listed countries have been told their lab access will be reviewed by March 31, and access will be terminated for anyone deemed "high risk" because they have worked at NIST for more than three years or work on sensitive projects such as quantum technology or AI. Researchers from low-risk countries also face losing access starting in September or December. NIST researchers do not conduct classified research, and former NIST director Patrick Gallagher said he sees no security benefit in the move.
- Amazon AWS Middle East data centers hit by fire and power outage
Amazon AWS disclosed that one of its Middle East data centers suffered a power outage while another caught fire after being struck by an "object"; it did not say what struck the facility. AWS said a data center in the UAE was struck at 7:30 a.m. ET, the impact produced sparks and a fire, and the fire department cut power to the data center and its generators while extinguishing the blaze.
- Why women's pain lasts longer
Doctors have generally assumed that the immune system worsens pain by causing inflammation, which typically shows up as redness and swelling. New research shows that immune cells may also be crucial to relieving pain, and that sex differences in immune-cell function may affect how quickly pain subsides. The researchers studied the molecule IL-10 (interleukin-10), measuring IL-10 levels in mice after skin injury and in emergency-room patients injured in traffic accidents. They found that IL-10 does more than dampen inflammation: it communicates directly with pain-sensing nerve cells to switch them off. In other words, IL-10 helps shut down pain. IL-10 is produced by monocytes, a type of white blood cell that circulates in the blood and migrates to injured tissue. In males, monocytes more readily produce IL-10; in females the effect is less pronounced. The reason is that testosterone influences how much IL-10 monocytes produce, and males have higher testosterone levels.
- Mouse study finds organs age in sync, with sex differences
Researchers have built the most detailed atlas to date of how aging affects thousands of cell subtypes across 21 mammalian tissues. By analyzing nearly 7 million single cells from mice of different ages, they identified the cells most vulnerable over time and the factors driving their aging. The team analyzed millions of single cells extracted from 21 organs of 32 mice in three age groups: 1 month (young adult), 5 months (middle-aged), and 21 months (old). They identified more than 1,800 distinct cell subtypes, including many rare types never fully described before, and then tracked how the numbers of each cell type changed across the age groups. For decades, scientists have held that aging mainly alters cell function rather than cell numbers; the team's analysis challenges that view. They found that about a quarter of cell types changed significantly in number over time: some muscle and kidney cells declined sharply, for instance, while immune cells increased substantially. These changes were synchronized across organs, with similar cell states appearing and disappearing in different organs almost simultaneously, a pattern suggesting that shared signals circulating in the blood may help coordinate aging across the body. About 40% of aging-related changes differed by sex; for example, females showed broader immune activation during aging.
- February 2026 Steam survey shows Simplified Chinese users with over half of share
Valve published its Steam hardware and software survey for February 2026. The data shows an anomaly: Simplified Chinese users exploded, up 30.74 percentage points in a single month to 54.60%, while English users fell 14.74 points to 22.27%. Another anomaly: Windows 11 dropped 10.43 points to 56.28%, while 64-bit Windows 10 rose 12.46 points to 40.25%. One explanation for the anomalies may be that mid-February was the Lunar New Year holiday for most Simplified Chinese users. Other figures: the Windows share overall rose 1.99 points to 96.61%, Linux fell 1.15 points to 2.23%, and macOS fell 0.85 points to 1.16%.
- Motorola announces partnership with GrapheneOS
Lenovo-owned Motorola announced a partnership with GrapheneOS, the security-hardened community Android distribution. GrapheneOS has so far mainly supported Google's Pixel phones; Motorola's press release did not say which of its models will support GrapheneOS, only that details will be announced later. Motorola also launched Moto Analytics, an enterprise analytics platform, and Moto Secure now supports a private image data feature that, when enabled, automatically strips sensitive metadata such as location from all newly captured images on the device.