OrangeBot.AI Digest — 2026-01-19

53 headlines across 4 sources, aggregated for this day.

Hacker News (15)

  1. Nearly a third of social media research has undisclosed ties to industry (www.science.org)
  2. Letter from a Birmingham Jail (1963) (www.africa.upenn.edu)
  3. What came first: the CNAME or the A record? (blog.cloudflare.com)
  4. Apple testing new App Store design that blurs the line between ads and results (9to5mac.com)
  5. American importers and consumers bear the cost of 2025 tariffs: analysis (www.kielinstitut.de)
  6. GLM-4.7-Flash (huggingface.co)
  7. Nvidia contacted Anna's Archive to access books (torrentfreak.com)
  8. Ask HN: COBOL devs, how is AI coding affecting your work?
  9. Article by article, how Big Tech shaped the EU's roll-back of digital rights (corporateeurope.org)
  10. Amazon is ending all inventory commingling as of March 31, 2026 (twitter.com)
  11. Wikipedia: WikiProject AI Cleanup (en.wikipedia.org)
  12. Radboud University selects Fairphone as standard smartphone for employees (www.ru.nl)
  13. A decentralized peer-to-peer messaging application that operates over Bluetooth (bitchat.free)
  14. Show HN: Pdfwithlove – PDF tools that run 100% locally (no uploads, no back end) (pdfwithlove.netlify.app)
  15. The Code-Only Agent (rijnard.com)

GitHub Trending (8)

  1. OpenBMB / VoxCPM

    VoxCPM: Tokenizer-Free TTS for Context-Aware Speech Generation and True-to-Life Voice Cloning

  2. google / langextract

    A Python library for extracting structured information from unstructured text using LLMs with precise source grounding and interactive visualization.

  3. iOfficeAI / AionUi

    Free, local, open-source Cowork for Gemini CLI, Claude Code, Codex, Opencode, Qwen Code, Goose Cli, Auggie, and more | 🌟 Star if you like it!

  4. czlonkowski / n8n-mcp

    An MCP for Claude Desktop / Claude Code / Windsurf / Cursor to build n8n workflows for you

  5. nautechsystems / nautilus_trader

    A high-performance algorithmic trading platform and event-driven backtester

  6. ahujasid / blender-mcp
  7. yichuan-w / LEANN

    RAG on Everything with LEANN. Enjoy 97% storage savings while running a fast, accurate, and 100% private RAG application on your personal device.

  8. DavidXanatos / TaskExplorer

    A powerful Task Manager

Hugging Face (15)

  1. Your Group-Relative Advantage Is Biased

    Reinforcement Learning from Verifier Rewards (RLVR) has emerged as a widely used approach for post-training large language models on reasoning tasks, with group-based methods such as GRPO and its variants gaining broad adoption. These methods rely on group-relative advantage estimation to avoid learned critics, yet its theoretical properties remain poorly understood. In this work, we uncover a fundamental issue of group-based RL: the group-relative advantage estimator is inherently biased relative to the true (expected) advantage. We provide the first theoretical analysis showing that it systematically underestimates advantages for hard prompts and overestimates them for easy prompts, leading to imbalanced exploration and exploitation. To address this issue, we propose History-Aware Adaptive Difficulty Weighting (HA-DW), an adaptive reweighting scheme that adjusts advantage estimates based on an evolving difficulty anchor and training dynamics. Both theoretical analysis and experiments on five mathematical reasoning benchmarks demonstrate that HA-DW consistently improves performance when integrated into GRPO and its variants. Our results suggest that correcting biased advantage estimation is critical for robust and efficient RLVR training.
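The group-relative estimator the abstract refers to can be sketched as follows. This is the standard GRPO-style computation, not the paper's HA-DW correction (whose details are not given here), and the sample rewards are illustrative.

```python
import statistics

def group_relative_advantage(rewards):
    # GRPO-style estimate: center each sampled response's reward on the
    # group mean and scale by the group std (guarded against zero).
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0
    return [(r - mean) / std for r in rewards]

# Hard prompt: only 1 of 4 rollouts succeeds. The advantage is computed
# against the in-group mean (0.25), a noisy stand-in for the true
# expected reward -- the gap the paper's bias analysis targets.
advantages = group_relative_advantage([0.0, 0.0, 0.0, 1.0])
```

Because the group mean is estimated from only a handful of samples per prompt, the estimate differs systematically from the true expected advantage, which is the bias the paper formalizes.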

  2. The Poisoned Apple Effect: Strategic Manipulation of Mediated Markets via Technology Expansion of AI Agents

    The integration of AI agents into economic markets fundamentally alters the landscape of strategic interaction. We investigate the economic implications of expanding the set of available technologies in three canonical game-theoretic settings: bargaining (resource division), negotiation (asymmetric information trade), and persuasion (strategic information transmission). We find that simply increasing the choice of AI delegates can drastically shift equilibrium payoffs and regulatory outcomes, often creating incentives for regulators to proactively develop and release technologies. Conversely, we identify a strategic phenomenon termed the "Poisoned Apple" effect: an agent may release a new technology, which neither they nor their opponent ultimately uses, solely to manipulate the regulator's choice of market design in their favor. This strategic release improves the releaser's welfare at the expense of their opponent and the regulator's fairness objectives. Our findings demonstrate that static regulatory frameworks are vulnerable to manipulation via technology expansion, necessitating dynamic market designs that adapt to the evolving landscape of AI capabilities.

  3. Unlocking Implicit Experience: Synthesizing Tool-Use Trajectories from Text

    Enabling Large Language Models (LLMs) to effectively utilize tools in multi-turn interactions is essential for building capable autonomous agents. However, acquiring diverse and realistic multi-turn tool-use data remains a significant challenge. In this work, we propose a novel text-based paradigm. We observe that textual corpora naturally contain rich, multi-step problem-solving experiences, which can serve as an untapped, scalable, and authentic data source for multi-turn tool-use tasks. Based on this insight, we introduce GEM, a data synthesis pipeline that enables the generation and extraction of multi-turn tool-use trajectories from text corpora through a four-stage process: relevance filtering, workflow & tool extraction, trajectory grounding, and complexity refinement. To reduce the computational cost, we further train a specialized Trajectory Synthesizer via supervised fine-tuning. This model distills the complex generation pipeline into an efficient, end-to-end trajectory generator. Experiments demonstrate that our GEM-32B achieves a 16.5% improvement on the BFCL V3 Multi-turn benchmark. Our models partially surpass the performance of models trained on τ-bench (Airline and Retail) in-domain data, highlighting the superior generalization capability derived from our text-based synthesis paradigm. Notably, our Trajectory Synthesizer matches the quality of the full pipeline while significantly reducing inference latency and costs.

  4. RubricHub: A Comprehensive and Highly Discriminative Rubric Dataset via Automated Coarse-to-Fine Generation

    Reinforcement Learning with Verifiable Rewards (RLVR) has driven substantial progress in reasoning-intensive domains like mathematics. However, optimizing open-ended generation remains challenging due to the lack of ground truth. While rubric-based evaluation offers a structured proxy for verification, existing methods suffer from scalability bottlenecks and coarse criteria, resulting in a supervision ceiling effect. To address this, we propose an automated Coarse-to-Fine Rubric Generation framework. By synergizing principle-guided synthesis, multi-model aggregation, and difficulty evolution, our approach produces comprehensive and highly discriminative criteria capable of capturing subtle nuances. Based on this framework, we introduce RubricHub, a large-scale (~110k) and multi-domain dataset. We validate its utility through a two-stage post-training pipeline comprising Rubric-based Rejection Sampling Fine-Tuning (RuFT) and Reinforcement Learning (RuRL). Experimental results demonstrate that RubricHub unlocks significant performance gains: our post-trained Qwen3-14B achieves state-of-the-art (SOTA) results on HealthBench (69.3), surpassing proprietary frontier models such as GPT-5. The code and data will be released soon.

  5. When Personalization Misleads: Understanding and Mitigating Hallucinations in Personalized LLMs

    Personalized large language models (LLMs) adapt model behavior to individual users to enhance user satisfaction, yet personalization can inadvertently distort factual reasoning. We show that when personalized LLMs face factual queries, there exists a phenomenon where the model generates answers aligned with a user's prior history rather than the objective truth, resulting in personalization-induced hallucinations that degrade factual reliability and may propagate incorrect beliefs, due to representational entanglement between personalization and factual representations. To address this issue, we propose Factuality-Preserving Personalized Steering (FPPS), a lightweight inference-time approach that mitigates personalization-induced factual distortions while preserving personalized behavior. We further introduce PFQABench, the first benchmark designed to jointly evaluate factual and personalized question answering under personalization. Experiments across multiple LLM backbones and personalization methods show that FPPS substantially improves factual accuracy while maintaining personalized performance.

  6. ACoT-VLA: Action Chain-of-Thought for Vision-Language-Action Models

    Vision-Language-Action (VLA) models have emerged as essential generalist robot policies for diverse manipulation tasks, conventionally relying on directly translating multimodal inputs into actions via Vision-Language Model (VLM) embeddings. Recent advancements have introduced explicit intermediary reasoning, such as sub-task prediction (language) or goal image synthesis (vision), to guide action generation. However, these intermediate reasoning steps are often indirect and inherently limited in their capacity to convey the full, granular information required for precise action execution. Instead, we posit that the most effective form of reasoning is one that deliberates directly in the action space. We introduce Action Chain-of-Thought (ACoT), a paradigm where the reasoning process itself is formulated as a structured sequence of coarse action intents that guide the final policy. In this paper, we propose ACoT-VLA, a novel architecture that materializes the ACoT paradigm. Specifically, we introduce two complementary components: an Explicit Action Reasoner (EAR) and an Implicit Action Reasoner (IAR). The former proposes coarse reference trajectories as explicit action-level reasoning steps, while the latter extracts latent action priors from internal representations of multimodal input, co-forming an ACoT that conditions the downstream action head to enable grounded policy learning. Extensive experiments in real-world and simulation environments demonstrate the superiority of our proposed method, which achieves 98.5%, 84.1%, and 47.4% on LIBERO, LIBERO-Plus and VLABench, respectively.

  7. BAPO: Boundary-Aware Policy Optimization for Reliable Agentic Search

    RL-based agentic search enables LLMs to solve complex questions via dynamic planning and external search. While this approach significantly enhances accuracy with agent policies optimized via large-scale reinforcement learning, we identify a critical gap in reliability: these agents fail to recognize their reasoning boundaries and rarely admit ``I DON'T KNOW'' (IDK) even when evidence is insufficient or reasoning reaches its limit. The lack of reliability often leads to plausible but unreliable answers, introducing significant risks in many real-world scenarios. To this end, we propose Boundary-Aware Policy Optimization (BAPO), a novel RL framework designed to cultivate reliable boundary awareness without compromising accuracy. BAPO introduces two key components: (i) a group-based boundary-aware reward that encourages an IDK response only when the reasoning reaches its limit, and (ii) an adaptive reward modulator that strategically suspends this reward during early exploration, preventing the model from exploiting IDK as a shortcut. Extensive experiments on four benchmarks demonstrate that BAPO substantially enhances the overall reliability of agentic search.
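One way to read the abstract's two components is as a reward function gated by group accuracy and a warm-up schedule. The reward values, the 0.25 accuracy threshold, and the warm-up step count below are invented for illustration; they are not the paper's constants.

```python
def boundary_aware_reward(answer, is_correct, group_accuracy, step,
                          warmup_steps=1000, idk="I DON'T KNOW"):
    # Correct answers always earn full reward.
    if is_correct:
        return 1.0
    # Early in training the IDK reward is suspended (the abstract's
    # adaptive modulator), so IDK cannot be exploited as a shortcut.
    if answer == idk and step >= warmup_steps:
        # Reward IDK only when the rollout group mostly fails, i.e. the
        # prompt sits at or beyond the policy's reasoning boundary.
        return 0.3 if group_accuracy < 0.25 else 0.0
    return 0.0
```

The gating on group accuracy is what makes the reward "boundary-aware": IDK is only profitable on prompts where the policy genuinely cannot answer.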

  8. Entropy Sentinel: Continuous LLM Accuracy Monitoring from Decoding Entropy Traces in STEM

    Deploying LLMs raises two coupled challenges: (1) monitoring - estimating where a model underperforms as traffic and domains drift - and (2) improvement - prioritizing data acquisition to close the largest performance gaps. We test whether an inference-time signal can estimate slice-level accuracy under domain shift. For each response, we compute an output-entropy profile from final-layer next-token probabilities (from top-k logprobs) and summarize it with eleven statistics. A lightweight classifier predicts instance correctness, and averaging predicted probabilities yields a domain-level accuracy estimate. We evaluate on ten STEM reasoning benchmarks with exhaustive train/test compositions (k in {1,2,3,4}; all "10 choose k" combinations), across nine LLMs from six families (3B-20B). Estimates often track held-out benchmark accuracy, and several models show near-monotonic ordering of domains. Output-entropy profiles are thus an accessible signal for scalable monitoring and for targeting data acquisition.
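The inference-time signal described above is straightforward to compute from API-style top-k logprobs. A minimal sketch, with three assumed summary statistics standing in for the paper's eleven:

```python
import math

def token_entropy(topk_logprobs):
    # Shannon entropy (nats) of one next-token distribution reconstructed
    # from top-k logprobs; mass outside the top-k is ignored, so this
    # approximates the full-vocabulary entropy from below.
    probs = [math.exp(lp) for lp in topk_logprobs]
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_profile(response_logprobs):
    # Per-response entropy trace plus a few summary statistics; a
    # lightweight classifier would consume these features to predict
    # instance correctness, and slice-level accuracy is estimated by
    # averaging its predicted probabilities.
    trace = [token_entropy(lps) for lps in response_logprobs]
    return {"mean": sum(trace) / len(trace),
            "max": max(trace),
            "last": trace[-1]}

# A confident token has near-zero entropy; an uncertain one is higher.
confident = [math.log(0.99), math.log(0.01)]
uncertain = [math.log(0.25)] * 4
profile = entropy_profile([confident, uncertain])
```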

  9. FrankenMotion: Part-level Human Motion Generation and Composition

    Human motion generation from text prompts has made remarkable progress in recent years. However, existing methods primarily rely on either sequence-level or action-level descriptions due to the absence of fine-grained, part-level motion annotations. This limits their controllability over individual body parts. In this work, we construct a high-quality motion dataset with atomic, temporally-aware part-level text annotations, leveraging the reasoning capabilities of large language models (LLMs). Unlike prior datasets that either provide synchronized part captions with fixed time segments or rely solely on global sequence labels, our dataset captures asynchronous and semantically distinct part movements at fine temporal resolution. Based on this dataset, we introduce a diffusion-based part-aware motion generation framework, namely FrankenMotion, where each body part is guided by its own temporally-structured textual prompt. This is, to our knowledge, the first work to provide atomic, temporally-aware part-level motion annotations and a model that allows motion generation with both spatial (body part) and temporal (atomic action) control. Experiments demonstrate that FrankenMotion outperforms all previous baseline models adapted and retrained for our setting, and our model can compose motions unseen during training. Our code and dataset will be publicly available upon publication.

  10. ProFit: Leveraging High-Value Signals in SFT via Probability-Guided Token Selection

    Supervised fine-tuning (SFT) is a fundamental post-training strategy to align Large Language Models (LLMs) with human intent. However, traditional SFT often ignores the one-to-many nature of language by forcing alignment with a single reference answer, leading to the model overfitting to non-core expressions. Although our empirical analysis suggests that introducing multiple reference answers can mitigate this issue, the prohibitive data and computational costs necessitate a strategic shift: prioritizing the mitigation of single-reference overfitting over the costly pursuit of answer diversity. To achieve this, we reveal the intrinsic connection between token probability and semantic importance: high-probability tokens carry the core logical framework, while low-probability tokens are mostly replaceable expressions. Based on this insight, we propose ProFit, which selectively masks low-probability tokens to prevent surface-level overfitting. Extensive experiments confirm that ProFit consistently outperforms traditional SFT baselines on general reasoning and mathematical benchmarks.
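The high-/low-probability split can be illustrated with a simple threshold mask over reference token probabilities. The 0.3 cutoff and hard binary masking are assumptions for illustration; the paper's actual selection rule may differ.

```python
def select_tokens(token_probs, threshold=0.3):
    # Keep high-probability tokens (assumed to carry the core logic),
    # mask out low-probability ones (assumed replaceable surface forms).
    return [p >= threshold for p in token_probs]

def masked_sft_loss(token_nlls, keep):
    # Average negative log-likelihood over selected tokens only, so
    # gradients ignore the masked (low-probability) positions.
    kept = [nll for nll, k in zip(token_nlls, keep) if k]
    return sum(kept) / len(kept)

# Illustrative per-token probabilities and losses for one reference answer.
probs = [0.9, 0.05, 0.8, 0.6, 0.1]
nlls = [0.1, 3.0, 0.2, 0.5, 2.3]
keep = select_tokens(probs)
loss = masked_sft_loss(nlls, keep)
```

Masking the two low-probability positions removes exactly the high-loss "surface form" tokens from the objective, which is the overfitting pressure ProFit aims to relieve.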

  11. Future Optical Flow Prediction Improves Robot Control & Video Generation

    Future motion representations, such as optical flow, offer immense value for control and generative tasks. However, forecasting generalizable spatially dense motion representations remains a key challenge, and learning such forecasting from noisy, real-world data remains relatively unexplored. We introduce FOFPred, a novel language-conditioned optical flow forecasting model featuring a unified Vision-Language Model (VLM) and Diffusion architecture. This unique combination enables strong multimodal reasoning with pixel-level generative fidelity for future motion prediction. Our model is trained on web-scale human activity data, a highly scalable but unstructured source. To extract meaningful signals from this noisy video-caption data, we employ crucial data preprocessing techniques and our unified architecture with strong image pretraining. The resulting trained model is then extended to tackle two distinct downstream tasks in control and generation. Evaluations across robotic manipulation and video generation under language-driven settings establish the cross-domain versatility of FOFPred, confirming the value of a unified VLM-Diffusion architecture and scalable learning from diverse web data for future optical flow prediction.

  12. ShapeR: Robust Conditional 3D Shape Generation from Casual Captures

    Recent advances in 3D shape generation have achieved impressive results, but most existing methods rely on clean, unoccluded, and well-segmented inputs. Such conditions are rarely met in real-world scenarios. We present ShapeR, a novel approach for conditional 3D object shape generation from casually captured sequences. Given an image sequence, we leverage off-the-shelf visual-inertial SLAM, 3D detection algorithms, and vision-language models to extract, for each object, a set of sparse SLAM points, posed multi-view images, and machine-generated captions. A rectified flow transformer trained to effectively condition on these modalities then generates high-fidelity metric 3D shapes. To ensure robustness to the challenges of casually captured data, we employ a range of techniques including on-the-fly compositional augmentations, a curriculum training scheme spanning object- and scene-level datasets, and strategies to handle background clutter. Additionally, we introduce a new evaluation benchmark comprising 178 in-the-wild objects across 7 real-world scenes with geometry annotations. Experiments show that ShapeR significantly outperforms existing approaches in this challenging setting, achieving an improvement of 2.7x in Chamfer distance compared to state of the art.

  13. Reasoning Models Generate Societies of Thought

    Large language models have achieved remarkable capabilities across domains, yet mechanisms underlying sophisticated reasoning remain elusive. Recent reasoning models outperform comparable instruction-tuned models on complex cognitive tasks, attributed to extended computation through longer chains of thought. Here we show that enhanced reasoning emerges not from extended computation alone, but from simulating multi-agent-like interactions -- a society of thought -- which enables diversification and debate among internal cognitive perspectives characterized by distinct personality traits and domain expertise. Through quantitative analysis and mechanistic interpretability methods applied to reasoning traces, we find that reasoning models like DeepSeek-R1 and QwQ-32B exhibit much greater perspective diversity than instruction-tuned models, activating broader conflict between heterogeneous personality- and expertise-related features during reasoning. This multi-agent structure manifests in conversational behaviors, including question-answering, perspective shifts, and the reconciliation of conflicting views, and in socio-emotional roles that characterize sharp back-and-forth conversations, together accounting for the accuracy advantage in reasoning tasks. Controlled reinforcement learning experiments reveal that base models increase conversational behaviors when rewarded solely for reasoning accuracy, and fine-tuning models with conversational scaffolding accelerates reasoning improvement over base models. These findings indicate that the social organization of thought enables effective exploration of solution spaces. We suggest that reasoning models establish a computational parallel to collective intelligence in human groups, where diversity enables superior problem-solving when systematically structured, which suggests new opportunities for agent organization to harness the wisdom of crowds.

  14. PersonalAlign: Hierarchical Implicit Intent Alignment for Personalized GUI Agent with Long-Term User-Centric Records

    While GUI agents have shown strong performance under explicit and complete instructions, real-world deployment requires aligning with users' more complex implicit intents. In this work, we highlight Hierarchical Implicit Intent Alignment for Personalized GUI Agent (PersonalAlign), a new agent task that requires agents to leverage long-term user records as persistent context to resolve omitted preferences in vague instructions and anticipate latent routines by user state for proactive assistance. To facilitate this study, we introduce AndroidIntent, a benchmark designed to evaluate agents' ability to resolve vague instructions and provide proactive suggestions through reasoning over long-term user records. We annotated 775 user-specific preferences and 215 routines from 20k long-term records across different users for evaluation. Furthermore, we introduce the Hierarchical Intent Memory Agent (HIM-Agent), which maintains a continuously updating personal memory and hierarchically organizes user preferences and routines for personalization. Finally, we evaluate a range of GUI agents on AndroidIntent, including GPT-5, Qwen3-VL, and UI-TARS; results show that HIM-Agent significantly improves execution and proactive performance by 15.7% and 7.3%, respectively.

  15. PhysRVG: Physics-Aware Unified Reinforcement Learning for Video Generative Models

    Physical principles are fundamental to realistic visual simulation, but remain a significant oversight in transformer-based video generation. This gap highlights a critical limitation in rendering rigid body motion, a core tenet of classical mechanics. While computer graphics and physics-based simulators can easily model such collisions using Newtonian mechanics, modern pretrain-finetune paradigms discard the concept of object rigidity during pixel-level global denoising. Even perfectly correct mathematical constraints are treated as suboptimal solutions (i.e., conditions) during model optimization in post-training, fundamentally limiting the physical realism of generated videos. Motivated by these considerations, we introduce, for the first time, a physics-aware reinforcement learning paradigm for video generation models that enforces physical collision rules directly in high-dimensional spaces, ensuring the physics knowledge is strictly applied rather than treated as conditions. Subsequently, we extend this paradigm to a unified framework, termed Mimicry-Discovery Cycle (MDcycle), which allows substantial fine-tuning while fully preserving the model's ability to leverage physics-grounded feedback. To validate our approach, we construct a new benchmark, PhysRVGBench, and perform extensive qualitative and quantitative experiments to thoroughly assess its effectiveness.

Solidot (15)

  1. Nearly a third of social media research has undisclosed ties to industry

    Social media researchers often need to collaborate with platforms, but these relationships frequently go undisclosed. Nearly a third of social media research has undisclosed industry ties: some researchers received funding from social media companies, and some have co-authored research with industry employees. The researchers argue such ties could distort findings. They analyzed 295 social media papers published in Science, Nature, PNAS and sister journals such as Science Advances and Nature Communications; these papers have been cited some 50,000 times and referenced in more than 15,000 news reports. One fifth of the listed authors acknowledged funding from, or collaboration with, social media companies. Using OpenAlex, the researchers then analyzed links between authors and social media companies and found that half of the authors had such ties, meaning 30% of authors did not disclose potential conflicts of interest.

  2. Chinese open-source AI models take 15% of global share

    According to an analysis by the AI tool platform OpenRouter and the venture capital firm a16z, generative AI developed by Chinese companies accounted for roughly 15% of the global market in November 2025, up sharply from 1% a year earlier. In performance tests, the DeepSeek model released last December ranked 9th among 92 models, and 1st among open-source models, followed by Alibaba's Qwen; both outperformed the open-source models from Google and OpenAI. Japanese companies are also using China's DeepSeek and Qwen when developing AI.

  3. Data centers will consume up to 70% of memory output in 2026

    According to a recent report, data centers will consume up to 70% of memory production in 2026. The exponential growth in memory demand will almost certainly hit many industries, including automotive, TVs and consumer electronics. Although cars and consumer electronics use older memory types, memory makers have scaled back or completely discontinued older memory chips. The report says 2028 memory capacity is already sold out, let alone this year's. Nearly all electronics need memory, and surging memory prices will force already thin-margin consumer electronics makers to pass most of the added cost on to consumers, assuming they can still get memory at all. IDC has updated its 2026 forecast, expecting smartphone shipments to fall 5% and PC shipments to fall 9%; these forecasts may be revised further in the coming months. Analysts call this the wildest period in the memory industry's history.

  4. Iran's internet blackout enters its 12th day

    According to monitoring by Cloudflare Radar and NetBlocks, Iran remains largely offline, with the internet blackout now in its 12th day. Traffic had briefly recovered before this, but soon became intermittent again, suggesting the Iranian government is testing a tightly filtered domestic internet. Official figures put the death toll from the earlier mass protests at more than 5,000, with more than 20,000 arrested; unofficial estimates suggest the death toll may exceed 10,000.

  5. Porsche sold more EVs than combustion cars in Europe in 2025

    Porsche announced last week that among the cars it sold in Europe in 2025, electric versions outsold combustion versions. Plug-in versions accounted for 57.9% of Porsche's European sales. Its best-selling model is the Macan, offered in both electric and combustion versions, with 84,328 units sold worldwide, of which 45,367 (53.8%) were electric. Even in the US market, the electric version accounted for a third of Macan sales. The US is the only major auto market where EV sales declined; EVs made up roughly 10% of total US sales last year.

  6. Microsoft releases emergency update to fix shutdown bug

    Microsoft's first Windows 11 update of 2026 turned out to contain multiple serious problems, forcing the company to release an emergency fix. The emergency update addresses several bugs, including Windows 11 23H2 PCs failing to shut down properly and Remote Desktop login problems. Microsoft also disclosed other issues: Outlook Classic crashes when using POP accounts (not yet fixed), and Windows randomly blacks out, with the desktop freezing for a second or two and the screen going black before recovering; that problem may be caused by the update itself or by a GPU driver compatibility issue.

  7. China's 2025 births fall below 8 million

    China's National Bureau of Statistics released its latest population figures. The national population stood at 1,404.89 million at the end of 2025, with 7.92 million births and 11.31 million deaths over the year, a net decline of 3.39 million. The birth rate was 5.63‰, the death rate 8.04‰, and the natural growth rate -2.41‰. By sex, there were 716.85 million males and 688.04 million females, a sex ratio of 104.19. By age, 851.36 million people were aged 16-59, or 60.6% of the population; 323.38 million were aged 60 and over (23.0%), including 223.65 million aged 65 and over (15.9%). By residence, the urban population was 953.80 million, up 10.30 million from the end of the previous year, while the rural population fell 13.69 million to 451.09 million; urban residents made up 67.89% of the population. Average years of schooling among those aged 16-59 reached 11.3, up 0.1 year from the previous year.

  8. Falling US birth rate hits universities

    The falling US birth rate has little effect on elite schools, but less well-known universities face enrollment shortfalls, forcing them to cut spending, lay off faculty and staff, or even close. Since 2020, more than 40 US colleges have announced closure plans. Huron Consulting Group predicts that roughly 400 institutions could disappear over the next decade, affecting about 600,000 students and redistributing $18 billion in endowments. The pool of US college students is expected to shrink further after 2025, leaving schools at risk of each incoming class being smaller than the last, with mounting financial pressure. US births peaked in 2007 and the birth rate has declined since. Struggling schools have typically filled the gap with international students, but Trump dealt a heavy blow to that strategy last year with a travel ban, slower visa processing, and threats to deport foreign student activists. As a result, international enrollment across the US fell by nearly 5,000 students last fall.

  9. OpenAI to test ads in ChatGPT in the coming weeks

    OpenAI announced on Friday that it will test introducing ads into ChatGPT in the coming weeks. The startup, valued at $500 billion, is seeking new revenue streams to fund its continued expansion and to compete with rivals such as Google and Anthropic. Ads will first be tested in the US, appearing at the bottom of answers for free-tier users and $8-per-month ChatGPT Go subscribers, and shown only when relevant to the user's query. Pro, Business, and Enterprise subscriptions will not carry ads. OpenAI projects ad revenue of several billion dollars in 2026, growing further in the years ahead.

  10. Anna's Archive faces a permanent injunction from a US federal court

    Anna's Archive lost its .org and .se domains over the past two weeks. This came after it scraped and published an archive of the music streaming service Spotify, though there is no evidence Spotify was involved. Its troubles are not over: OCLC, owner of the proprietary WorldCat database, has obtained a default judgment and permanent injunction against Anna's Archive, which scraped and published the WorldCat database two years ago. OCLC is expected to ask Anna's Archive's hosting providers to remove the WorldCat data. It is unclear for now whether the domain losses are related.

  11. Solar met 60% of new US electricity demand in 2025

    Analysis by the energy think tank Ember shows US electricity demand surged by 135 TWh in 2025, while solar generation grew a record 83 TWh over the same period, meeting 61% of the new demand. This underscores that solar has become a core component of US electricity. Texas, the Midwest and the Mid-Atlantic saw the fastest solar growth last year, and they are also the regions with the fastest demand growth. Solar met 81% of new electricity demand in Texas and the Midwest, and 33% in the Mid-Atlantic.

  12. Tencent sends DMCA notices to more than 30 WeChat-related GitHub projects

    Tencent's WeChat is notorious for its ever-growing storage footprint, which has prompted many developers to analyze it; some published their analysis methods or related cleanup tools on the code hosting platform GitHub. Early this month, however, Tencent's legal team sent DMCA notices to more than 30 GitHub projects, forcing them offline. Tencent's lawyers accused the developers of violating the DMCA's anti-circumvention provisions, violating WeChat's prohibition on reverse engineering, threatening user privacy and security, and infringing intellectual property.

  13. Microsoft's latest security update leaves some Windows 11 PCs unable to shut down

    According to an official Microsoft advisory, its latest security update leaves some Windows 11 23H2 PCs unable to shut down properly. Affected PCs cannot enter shutdown or hibernation, instead staying awake and stubbornly resisting shutdown commands. The bug is related to Secure Launch, a virtualization-based protection feature designed to ensure that only trusted components load during boot. On systems with Secure Launch enabled, PCs with the latest security update cannot shut down, restart, or hibernate. Microsoft says a workaround is to force shutdown from the command line with "shutdown /s /t 0". There is no fix yet.

  14. Iran restores internet access

    According to Cloudflare Radar monitoring, Iran has gradually restored internet access after a total blackout lasting more than 200 hours. Iran's semi-official Fars news agency said on Saturday that the country would gradually lift the internet and communications restrictions imposed after protests triggered by the economic crisis late last month. Iran first restored SMS service, then access to the domestic internet and domestic apps, and finally access to the international internet. Update: NetBlocks reports that Iran has not yet reopened international internet access.

  15. Monster Hunter Wilds performance problems linked to DLC checks

    Players of Monster Hunter Wilds, the latest entry in Capcom's Monster Hunter series, have complained about poor performance optimization since its release early last year. After testing, players now believe the problem is related to continuous background checks of DLC ownership. Monster Hunter Wilds has 190 DLC items, most of them cosmetics such as outfits. Using a mod that disables the DLC checks, players compared frame rates on mid-range hardware with the mod off and on: 26 FPS without the mod versus 46 FPS with it. Testing also found that the more DLC an account has purchased or registered, the smaller the performance drop.