WEEK · 2026-W07

Weekly Digest — 2026-W07

131 unique stories (2026-02-09 to 2026-02-15), aggregated across 8 sources.

Hacker News (42)

  1. Another GitHub outage in the same day (www.githubstatus.com)
  2. Testing Ads in ChatGPT (openai.com)
  3. Irish man with valid US work permit held in ICE detention for five months (www.theguardian.com)
  4. Converting a $3.88 analog clock from Walmart into an ESP8266-based Wi-Fi clock (github.com)
  5. Why is the sky blue? (explainers.blog)
  6. GitHub Is Down (github.com)
  7. Google handed ICE student journalist's bank and credit card numbers (theintercept.com)
  8. The Singularity will occur on a Tuesday (campedersen.com)
  9. Ex-GitHub CEO launches a new developer platform for AI agents (entire.io)
  10. Parse, Don't Validate (2019) (lexi-lambda.github.io)
  11. I started programming when I was 7. I'm 50 now and the thing I loved has changed (www.jamesdrandall.com)
  12. Europe's $24T Breakup with Visa and Mastercard Has Begun (europeanbusinessmagazine.com)

GitHub Trending (22)

  1. KeygraphHQ / shannon

    Fully autonomous AI hacker to find actual exploits in your web apps. Shannon has achieved a 96.15% success rate on the hint-free, source-aware XBOW Benchmark.

  2. virattt / dexter

    An autonomous agent for deep financial research

  3. pydantic / monty

    A minimal, secure Python interpreter written in Rust for use by AI

  4. hsliuping / TradingAgents-CN

    A Chinese financial trading framework based on multi-agent LLMs; an enhanced Chinese edition of TradingAgents

  5. iOfficeAI / AionUi

    Free, local, open-source 24/7 Cowork and OpenClaw for Gemini CLI, Claude Code, Codex, OpenCode, Qwen Code, Goose CLI, Auggie, and more | 🌟 Star if you like it!

  6. public-apis / public-apis

    A collective list of free APIs

  7. google / langextract

    A Python library for extracting structured information from unstructured text using LLMs with precise source grounding and interactive visualization.

  8. github / gh-aw

    GitHub Agentic Workflows

  9. EveryInc / compound-engineering-plugin

    Official Claude Code compound engineering plugin

  10. microsoft / PowerToys

    Microsoft PowerToys is a collection of utilities that supercharge productivity and customization on Windows

  11. ChromeDevTools / chrome-devtools-mcp

    Chrome DevTools for coding agents

  12. patchy631 / ai-engineering-hub

    In-depth tutorials on LLMs, RAGs and real-world AI agent applications.

Hugging Face (31)

  1. F-GRPO: Don't Let Your Policy Learn the Obvious and Forget the Rare

    Reinforcement Learning with Verifiable Rewards (RLVR) is commonly based on group sampling to estimate advantages and stabilize policy updates. In practice, large group sizes are not feasible due to computational limits, which biases learning toward trajectories that are already likely. Smaller groups often miss rare-correct trajectories while still containing mixed rewards, concentrating probability on common solutions. We derive the probability that updates miss rare-correct modes as a function of group size, showing non-monotonic behavior, and characterize how updates redistribute mass within the correct set, revealing that unsampled-correct mass can shrink even as total correct mass grows. Motivated by this analysis, we propose a difficulty-aware advantage scaling coefficient, inspired by Focal loss, that down-weights updates on high-success prompts. The lightweight modification can be directly integrated into any group-relative RLVR algorithm such as GRPO, DAPO, and CISPO. On Qwen2.5-7B across in-domain and out-of-domain benchmarks, our method improves pass@256 from 64.1 → 70.3 (GRPO), 69.3 → 72.5 (DAPO), and 73.2 → 76.8 (CISPO), while preserving or improving pass@1, without increasing group size or computational cost.
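    The paper's exact coefficient is not given in this summary; a minimal sketch of how a focal-loss-style, difficulty-aware factor could plug into a group-relative advantage computation might look like the following (the function name, the gamma exponent, and the binary-reward setup are all illustrative assumptions):

```python
import numpy as np

def group_relative_advantages(rewards, gamma=2.0):
    """Group-relative advantages with a difficulty-aware scale.

    rewards: binary (0/1) rewards for one prompt's sampled group.
    The focal-style factor (1 - p)**gamma shrinks updates on
    high-success (easy) prompts, where p is the group success rate.
    """
    rewards = np.asarray(rewards, dtype=float)
    p = rewards.mean()                       # empirical success rate
    std = rewards.std()
    base = (rewards - p) / (std + 1e-8)      # standard GRPO-style advantage
    return (1.0 - p) ** gamma * base         # down-weight easy prompts

# An easy prompt (7/8 correct) yields much smaller advantages than a
# hard prompt (1/8 correct), even though the raw advantages mirror
# each other in magnitude.
easy = group_relative_advantages([1, 1, 1, 1, 1, 1, 1, 0])
hard = group_relative_advantages([1, 0, 0, 0, 0, 0, 0, 0])
```

    Under this toy scaling, updates on nearly-solved prompts are suppressed, leaving gradient budget for prompts where rare-correct trajectories still matter.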

  2. Baichuan-M3: Modeling Clinical Inquiry for Reliable Medical Decision-Making

    We introduce Baichuan-M3, a medical-enhanced large language model engineered to shift the paradigm from passive question-answering to active, clinical-grade decision support. Addressing the limitations of existing systems in open-ended consultations, Baichuan-M3 utilizes a specialized training pipeline to model the systematic workflow of a physician. Key capabilities include: (i) proactive information acquisition to resolve ambiguity; (ii) long-horizon reasoning that unifies scattered evidence into coherent diagnoses; and (iii) adaptive hallucination suppression to ensure factual reliability. Empirical evaluations demonstrate that Baichuan-M3 achieves state-of-the-art results on HealthBench, the newly introduced HealthBench-Hallu and ScanBench, significantly outperforming GPT-5.2 in clinical inquiry, advisory and safety. The models are publicly available at https://huggingface.co/collections/baichuan-inc/baichuan-m3.

  3. OdysseyArena: Benchmarking Large Language Models For Long-Horizon, Active and Inductive Interactions

    The rapid advancement of Large Language Models (LLMs) has catalyzed the development of autonomous agents capable of navigating complex environments. However, existing evaluations primarily adopt a deductive paradigm, where agents execute tasks based on explicitly provided rules and static goals, often within limited planning horizons. Crucially, this neglects the inductive necessity for agents to discover latent transition laws from experience autonomously, which is the cornerstone for enabling agentic foresight and sustaining strategic coherence. To bridge this gap, we introduce OdysseyArena, which re-centers agent evaluation on long-horizon, active, and inductive interactions. We formalize and instantiate four primitives, translating abstract transition dynamics into concrete interactive environments. Building upon this, we establish OdysseyArena-Lite for standardized benchmarking, providing a set of 120 tasks to measure an agent's inductive efficiency and long-horizon discovery. Pushing further, we introduce OdysseyArena-Challenge to stress-test agent stability across extreme interaction horizons (e.g., > 200 steps). Extensive experiments on 15+ leading LLMs reveal that even frontier models exhibit a deficiency in inductive scenarios, identifying a critical bottleneck in the pursuit of autonomous discovery in complex environments. Our code and data are available at https://github.com/xufangzhi/Odyssey-Arena

  4. AudioSAE: Towards Understanding of Audio-Processing Models with Sparse AutoEncoders

    Sparse Autoencoders (SAEs) are powerful tools for interpreting neural representations, yet their use in audio remains underexplored. We train SAEs across all encoder layers of Whisper and HuBERT, provide an extensive evaluation of their stability, interpretability, and show their practical utility. Over 50% of the features remain consistent across random seeds, and reconstruction quality is preserved. SAE features capture general acoustic and semantic information as well as specific events, including environmental noises and paralinguistic sounds (e.g. laughter, whispering) and disentangle them effectively, requiring removal of only 19-27% of features to erase a concept. Feature steering reduces Whisper's false speech detections by 70% with negligible WER increase, demonstrating real-world applicability. Finally, we find SAE features correlated with human EEG activity during speech perception, indicating alignment with human neural processing. The code and checkpoints are available at https://github.com/audiosae/audiosae_demo.
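    As background, the sparse-autoencoder mechanics underlying this line of work can be sketched generically (this is a textbook-style top-k SAE forward pass, not the paper's architecture; all names and dimensions are illustrative):

```python
import numpy as np

def sae_forward(x, W_enc, b_enc, W_dec, b_dec, k=4):
    """Generic top-k sparse autoencoder over a batch of activations x.

    Encode into an overcomplete feature space, keep only the k
    largest activations per example (sparsity), then reconstruct.
    """
    pre = x @ W_enc + b_enc
    acts = np.maximum(pre, 0.0)                     # ReLU feature activations
    idx = np.argsort(acts, axis=-1)[:, :-k]         # indices of all but top-k
    sparse = acts.copy()
    np.put_along_axis(sparse, idx, 0.0, axis=-1)    # zero the rest
    recon = sparse @ W_dec + b_dec
    return sparse, recon

rng = np.random.default_rng(1)
d, m, n = 16, 64, 8                   # model dim, feature dim, batch size
x = rng.standard_normal((n, d))
W_enc = rng.standard_normal((d, m)) / np.sqrt(d)
W_dec = rng.standard_normal((m, d)) / np.sqrt(m)
sparse, recon = sae_forward(x, W_enc, np.zeros(m), W_dec, np.zeros(d))
```

    Concept erasure of the kind the abstract measures then amounts to zeroing a chosen subset of feature columns before reconstruction.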

  5. On the Entropy Dynamics in Reinforcement Fine-Tuning of Large Language Models

    Entropy serves as a critical metric for measuring the diversity of outputs generated by large language models (LLMs), providing valuable insights into their exploration capabilities. While recent studies increasingly focus on monitoring and adjusting entropy to better balance exploration and exploitation in reinforcement fine-tuning (RFT), a principled understanding of entropy dynamics during this process is yet to be thoroughly investigated. In this paper, we establish a theoretical framework for analyzing the entropy dynamics during the RFT process, which begins with a discriminant expression that quantifies entropy change under a single logit update. This foundation enables the derivation of a first-order expression for entropy change, which can be further extended to the update formula of Group Relative Policy Optimization (GRPO). The corollaries and insights drawn from the theoretical analysis inspire the design of entropy control methods, and also offer a unified lens for interpreting various entropy-based methods in existing studies. We provide empirical evidence to support the main conclusions of our analysis and demonstrate the effectiveness of the derived entropy-discriminator clipping methods. This study yields novel insights into RFT training dynamics, providing theoretical support and practical strategies for optimizing the exploration-exploitation balance during LLM fine-tuning.
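    The kind of first-order analysis described here can be illustrated with the standard softmax entropy gradient, dH/dz_k = -p_k(log p_k + H); this identity is textbook material, not the paper's specific discriminant. A quick numerical check that the linearization matches the actual entropy change for a small logit step:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return -np.sum(p * np.log(p))

def entropy_grad(z):
    """dH/dz_k = -p_k * (log p_k + H) for p = softmax(z)."""
    p = softmax(z)
    return -p * (np.log(p) + entropy(p))

z = np.array([2.0, 1.0, 0.0, -1.0])
g = entropy_grad(z)

# First-order prediction vs. actual change for a tiny logit step
# on a low-probability token (index 3).
eps = 1e-5
k = 3
dz = np.zeros_like(z); dz[k] = eps
predicted = g[k] * eps
actual = entropy(softmax(z + dz)) - entropy(softmax(z))
```

    The sign structure already explains one common observation: boosting the logit of a token rarer than average (where -log p_k exceeds H) raises entropy, while reinforcing an already-dominant token lowers it.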

  6. Pisets: A Robust Speech Recognition System for Lectures and Interviews

    This work presents a speech-to-text system "Pisets" for scientists and journalists which is based on a three-component architecture aimed at improving speech recognition accuracy while minimizing errors and hallucinations associated with the Whisper model. The architecture comprises primary recognition using Wav2Vec2, false positive filtering via the Audio Spectrogram Transformer (AST), and final speech recognition through Whisper. The implementation of curriculum learning methods and the utilization of diverse Russian-language speech corpora significantly enhanced the system's effectiveness. Additionally, advanced uncertainty modeling techniques were introduced, contributing to further improvements in transcription quality. The proposed approaches ensure robust transcribing of long audio data across various acoustic conditions compared to WhisperX and the usual Whisper model. The source code of "Pisets" system is publicly available at GitHub: https://github.com/bond005/pisets.

  7. QuantaAlpha: An Evolutionary Framework for LLM-Driven Alpha Mining

    Financial markets are noisy and non-stationary, making alpha mining highly sensitive to noise in backtesting results and sudden market regime shifts. While recent agentic frameworks improve alpha mining automation, they often lack controllable multi-round search and reliable reuse of validated experience. To address these challenges, we propose QuantaAlpha, an evolutionary alpha mining framework that treats each end-to-end mining run as a trajectory and improves factors through trajectory-level mutation and crossover operations. QuantaAlpha localizes suboptimal steps in each trajectory for targeted revision and recombines complementary high-reward segments to reuse effective patterns, enabling structured exploration and refinement across mining iterations. During factor generation, QuantaAlpha enforces semantic consistency across the hypothesis, factor expression, and executable code, while constraining the complexity and redundancy of the generated factor to mitigate crowding. Extensive experiments on the China Securities Index 300 (CSI 300) demonstrate consistent gains over strong baseline models and prior agentic systems. When utilizing GPT-5.2, QuantaAlpha achieves an Information Coefficient (IC) of 0.1501, with an Annualized Rate of Return (ARR) of 27.75% and a Maximum Drawdown (MDD) of 7.98%. Moreover, factors mined on CSI 300 transfer effectively to the China Securities Index 500 (CSI 500) and the Standard & Poor's 500 Index (S&P 500), delivering 160% and 137% cumulative excess return over four years, respectively, which indicates strong robustness of QuantaAlpha under market distribution shifts.

  8. MOVA: Towards Scalable and Synchronized Video-Audio Generation

    Audio is indispensable for real-world video, yet generation models have largely overlooked audio components. Current approaches to producing audio-visual content often rely on cascaded pipelines, which increase cost, accumulate errors, and degrade overall quality. While systems such as Veo 3 and Sora 2 emphasize the value of simultaneous generation, joint multimodal modeling introduces unique challenges in architecture, data, and training. Moreover, the closed-source nature of existing systems limits progress in the field. In this work, we introduce MOVA (MOSS Video and Audio), an open-source model capable of generating high-quality, synchronized audio-visual content, including realistic lip-synced speech, environment-aware sound effects, and content-aligned music. MOVA employs a Mixture-of-Experts (MoE) architecture, with a total of 32B parameters, of which 18B are active during inference. It supports IT2VA (Image-Text to Video-Audio) generation task. By releasing the model weights and code, we aim to advance research and foster a vibrant community of creators. The released codebase features comprehensive support for efficient inference, LoRA fine-tuning, and prompt enhancement.

  9. Weak-Driven Learning: How Weak Agents make Strong Agents Stronger

    As post-training optimization becomes central to improving large language models, we observe a persistent saturation bottleneck: once models grow highly confident, further training yields diminishing returns. While existing methods continue to reinforce target predictions, we find that informative supervision signals remain latent in models' own historical weak states. Motivated by this observation, we propose WMSS (Weak Agents Can Make Strong Agents Stronger), a post-training paradigm that leverages weak checkpoints to guide continued optimization. By identifying recoverable learning gaps via entropy dynamics and reinforcing them through compensatory learning, WMSS enables strong agents to improve beyond conventional post-training saturation. Experiments on mathematical reasoning and code generation datasets show that agents trained with our approach achieve effective performance improvements, while incurring zero additional inference cost.

  10. AIRS-Bench: a Suite of Tasks for Frontier AI Research Science Agents

    LLM agents hold significant promise for advancing scientific research. To accelerate this progress, we introduce AIRS-Bench (the AI Research Science Benchmark), a suite of 20 tasks sourced from state-of-the-art machine learning papers. These tasks span diverse domains, including language modeling, mathematics, bioinformatics, and time series forecasting. AIRS-Bench tasks assess agentic capabilities over the full research lifecycle -- including idea generation, experiment analysis and iterative refinement -- without providing baseline code. The AIRS-Bench task format is versatile, enabling easy integration of new tasks and rigorous comparison across different agentic frameworks. We establish baselines using frontier models paired with both sequential and parallel scaffolds. Our results show that agents exceed human SOTA in four tasks but fail to match it in sixteen others. Even when agents surpass human benchmarks, they do not reach the theoretical performance ceiling for the underlying tasks. These findings indicate that AIRS-Bench is far from saturated and offers substantial room for improvement. We open-source the AIRS-Bench task definitions and evaluation code to catalyze further development in autonomous scientific research.

  11. Recurrent-Depth VLA: Implicit Test-Time Compute Scaling of Vision-Language-Action Models via Latent Iterative Reasoning

    Current Vision-Language-Action (VLA) models rely on fixed computational depth, expending the same amount of compute on simple adjustments and complex multi-step manipulation. While Chain-of-Thought (CoT) prompting enables variable computation, it scales memory linearly and is ill-suited for continuous action spaces. We introduce Recurrent-Depth VLA (RD-VLA), an architecture that achieves computational adaptivity via latent iterative refinement rather than explicit token generation. RD-VLA employs a recurrent, weight-tied action head that supports arbitrary inference depth with a constant memory footprint. The model is trained using truncated backpropagation through time (TBPTT) to efficiently supervise the refinement process. At inference, RD-VLA dynamically allocates compute using an adaptive stopping criterion based on latent convergence. Experiments on challenging manipulation tasks show that recurrent depth is critical: tasks that fail entirely (0 percent success) with single-iteration inference exceed 90 percent success with four iterations, while simpler tasks saturate rapidly. RD-VLA provides a scalable path to test-time compute in robotics, replacing token-based reasoning with latent reasoning to achieve constant memory usage and up to 80x inference speedup over prior reasoning-based VLA models. Project page: https://rd-vla.github.io/
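    The core idea, a weight-tied head iterated to convergence in constant memory, can be sketched as a toy fixed-point iteration (not RD-VLA's actual architecture; the update rule, dimensions, and stopping tolerance are illustrative):

```python
import numpy as np

def refine(x, W, U, b, tol=1e-6, max_iters=64):
    """Weight-tied latent refinement: h <- tanh(W h + U x + b).

    The same weights are reused at every depth, so memory stays
    constant; iteration stops once the latent stops moving.
    """
    h = np.zeros(W.shape[0])
    for step in range(1, max_iters + 1):
        h_new = np.tanh(W @ h + U @ x + b)
        if np.linalg.norm(h_new - h) < tol:   # adaptive stopping criterion
            return h_new, step
        h = h_new
    return h, max_iters

rng = np.random.default_rng(0)
d = 8
W = 0.2 * rng.standard_normal((d, d)) / np.sqrt(d)  # small scale keeps the map contractive
U = rng.standard_normal((d, d)) / np.sqrt(d)
b = np.zeros(d)
x = rng.standard_normal(d)
h, steps = refine(x, W, U, b)
```

    In this framing, "harder" inputs are simply those whose latent takes more iterations to settle, which is how a single set of weights can spend variable compute per input.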

  12. LLaDA2.1: Speeding Up Text Diffusion via Token Editing

    While LLaDA2.0 showcased the scaling potential of 100B-level block-diffusion models and their inherent parallelization, the delicate equilibrium between decoding speed and generation quality has remained an elusive frontier. Today, we unveil LLaDA2.1, a paradigm shift designed to transcend this trade-off. By seamlessly weaving Token-to-Token (T2T) editing into the conventional Mask-to-Token (M2T) scheme, we introduce a joint, configurable threshold-decoding scheme. This structural innovation gives rise to two distinct personas: the Speedy Mode (S Mode), which audaciously lowers the M2T threshold to bypass traditional constraints while relying on T2T to refine the output; and the Quality Mode (Q Mode), which leans into conservative thresholds to secure superior benchmark performances with manageable efficiency degrade. Furthering this evolution, underpinned by an expansive context window, we implement the first large-scale Reinforcement Learning (RL) framework specifically tailored for dLLMs, anchored by specialized techniques for stable gradient estimation. This alignment not only sharpens reasoning precision but also elevates instruction-following fidelity, bridging the chasm between diffusion dynamics and complex human intent. We culminate this work by releasing LLaDA2.1-Mini (16B) and LLaDA2.1-Flash (100B). Across 33 rigorous benchmarks, LLaDA2.1 delivers strong task performance and lightning-fast decoding speed. Despite its 100B volume, on coding tasks it attains an astounding 892 TPS on HumanEval+, 801 TPS on BigCodeBench, and 663 TPS on LiveCodeBench.
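    The joint M2T/T2T threshold scheme can be illustrated with a toy decoding step (purely illustrative; this is not the LLaDA2.1 decoder, and the thresholds and token values are made up): masked positions are committed when model confidence clears the M2T threshold, while already-committed tokens are reopened for editing when their confidence drops below the T2T threshold. A lower M2T bar commits more tokens per step (the Speedy persona), relying on the editing pass to repair mistakes.

```python
import numpy as np

MASK = -1

def decode_step(tokens, proposal, conf, m2t_tau, t2t_tau):
    """One toy joint M2T/T2T decoding step.

    tokens:   current sequence, MASK where undecided
    proposal: model's argmax token per position
    conf:     model's confidence per position
    """
    out = tokens.copy()
    masked = tokens == MASK
    commit = masked & (conf >= m2t_tau)    # M2T: fill confident masks
    edit = ~masked & (conf < t2t_tau)      # T2T: re-edit shaky commits
    out[commit] = proposal[commit]
    out[edit] = proposal[edit]
    return out

tokens = np.array([MASK, MASK, 7, 7])
proposal = np.array([3, 4, 5, 6])
conf = np.array([0.9, 0.4, 0.2, 0.95])
# Speedy-mode-style thresholds: low M2T bar, active editing.
out = decode_step(tokens, proposal, conf, m2t_tau=0.5, t2t_tau=0.3)
```

    Position 0 is committed, position 1 stays masked for a later step, position 2 is edited, and position 3 is left alone.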

Solidot (36)

  1. Automakers rush to strip out Chinese software code to comply with new US rules

    New US rules require automakers to prove to the US government that, starting March 17, the core components of their products contain no code developed in China or written by Chinese companies. The rules also cover advanced autonomous-driving software and will extend to hardware from 2029. The move is intended to keep in-car cameras, microphones, and GPS tracking systems from being exploited by foreign adversaries, and serves as a test case for US efforts to decouple from Chinese supply chains. Hilary Cain, policy director at the Alliance for Automotive Innovation, called it one of the most consequential and complex pieces of auto regulation in decades, requiring deep supply-chain audits and strict adherence to the compliance timeline.

  2. Skies got cleaner during the COVID-19 pandemic, but methane emissions surged

    In the spring of 2020, the COVID-19 pandemic brought global industry and tourism to a standstill. Satellites recorded a sharp drop in nitrogen dioxide, a byproduct of internal combustion engines and heavy industry, and global air quality was the best in decades. At the same time, however, concentrations of the greenhouse gas methane spiked: the growth rate hit 16.2 ppb that year, the highest since records began in the 1980s. According to a study published in Science, Peking University researchers attribute part of the surge to the decline in atmospheric nitrogen oxides. Atmospheric methane is broken down by hydroxyl radicals into water vapor and carbon dioxide. The hydroxyl radical, the atmosphere's methane scrubber, must be continuously replenished through a chain of sunlight-driven chemical reactions whose key ingredient is nitrogen oxides. Pandemic lockdowns cut global nitrogen oxide concentrations by roughly 15-20%, which sharply slowed hydroxyl radical production and let methane linger longer in the atmosphere, amplifying global warming. The additional methane came mainly from microbes: the pandemic coincided with a La Niña event, which typically brings heavier rainfall to the tropics, and in waterlogged, oxygen-starved conditions methane-producing microbes multiplied rapidly. Using satellite data, the researchers traced the new methane mainly to vast wetlands in tropical Africa and Southeast Asia, which drove an increase of roughly 30% in global methane emissions between 2020 and 2022.

  3. Linux From Scratch drops its System V edition

    The Linux From Scratch (LFS) project provides step-by-step instructions for building a customized Linux system from source. It has offered both System V and systemd editions, letting users choose their init system. The project has now announced that it will no longer provide the System V edition. The first reason is workload: the volunteers are overwhelmed, since LFS contains 88 packages and Beyond Linux From Scratch (BLFS) over 1,000, and every package update must be checked for compatibility with both System V and systemd. The second reason is that the GNOME and KDE Plasma desktop environments will in future support only systemd. LFS 13.0, expected in March, will ship as a systemd-only edition.

  4. LineageOS 23.2 released

    LineageOS 23.2 has been released. Major changes include support for the Material 3 Expressive visual style, a fully customizable Quick Settings panel, an expanded dark theme, and more powerful file management tools for the Private Space, among other things. The developers note that the Android Open Source Project (AOSP) moved to a quarterly release cadence in recent years and that Google recently announced a shift from quarterly to twice-yearly releases; the LineageOS project will adopt the same six-month cadence.

  5. Ardour 9.0 released

    The open-source digital audio workstation Ardour has released version 9.0, a major update that delivers long-requested features. Highlights include Region FX, clip recording, a touch-oriented GUI, a pianoroll window, and clip editing, along with dozens of bug fixes, new MIDI binding maps, and improved GUI performance for most macOS users. The developers say they look forward to user feedback.

  6. Linux 6.19 released

    Linus Torvalds announced the release of Linux 6.19 on the kernel mailing list and confirmed that the next version will be Linux 7.0. Major features in 6.19 include initial support for Intel's Linear Address Space Separation, support for Arm Memory System Resource Partitioning and Monitoring, the listns() system call, a new implementation of restartable sequences, support for large block sizes in ext4, memory-safety improvements, the live-update orchestrator, and more. See KernelNewbies 6.19 for details.

  7. 410 tankers were abandoned in 2025

    Last November, the tanker Ivan serves on was carrying 750,000 barrels of Russian crude from the Far East to China. After learning that the crew had gone unpaid for months, the International Transport Workers' Federation (ITF) declared the vessel abandoned in December. The tanker is currently sitting in international waters; under close scrutiny from multiple parties, China will not allow it to dock. The ITF has helped Ivan and the other crew members recover their December wages and has delivered food, drinking water, and other necessities. Some of the crew have gone home, but most, Ivan included, remain stranded aboard. ITF statistics show 20 ships were abandoned worldwide in 2016; by 2025 the figure had soared to 410, with 6,223 merchant seafarers left as victims, and both numbers are up nearly a third from 2024. Geopolitical instability is the main driver of tanker abandonment. Like the ship Ivan is trapped on, most of these vessels have owners of unknown identity, are old, possibly uninsured, and are registered in loosely regulated states such as Panama, Liberia, and the Marshall Islands. Gambia had no tankers at all in 2023, but by March 2025 its registry held 35. Under International Maritime Organization (IMO) guidelines, seafarers count as abandoned once at least two months of contract wages go unpaid. Indian seafarers were hit hardest by tanker abandonment in 2025, with 1,125 affected, or 18% of the total, followed by the Philippines (539) and Syria (309).

  8. Electric vehicles help improve air quality

    According to a study published in The Lancet Planetary Health, electric vehicles help improve air quality. The study examined how the growing number of battery-electric and plug-in hybrid vehicles affects air pollution in California, the US state with the largest plug-in fleet, now large enough to have a measurable positive effect on air quality. Using satellite data that tracks NO2 levels by how NO2 absorbs and reflects sunlight, the researchers found that between 2019 and 2023, every additional 200 battery-electric or plug-in hybrid vehicles corresponded to a 1.1% drop in NO2 levels. NO2 can trigger asthma and bronchitis and raises the risk of heart disease and stroke. The study also confirmed that pollutant emissions rise in communities where gasoline vehicles increase.

  9. Vegetarian toddlers grow as fast as omnivorous toddlers

    Infants born into vegetarian families may be slightly leaner early on, but by age two their weight catches up with peers from omnivorous families. Researchers at Israel's Ben-Gurion University of the Negev analyzed data on 1.2 million infants collected from Israel's national family care centers between 2014 and 2023, recording each infant's length, weight, and head circumference from birth to 24 months, and compared the growth data with the diet type reported by the infants' parents. The vast majority of families said they were omnivorous; only 1.2% identified as vegetarian and 0.3% as vegan, which still left roughly 18,000 infants in vegetarian and vegan households. The infants were divided into three groups by diet type. In the first 60 days after birth, length, head circumference, and rates of growth restriction were similar across all three groups. Infants from meat-free households, especially vegan ones, were more likely to be underweight, but by around age two these differences had largely disappeared and the three groups' growth metrics converged. The researchers say the study should be reassuring in showing that meat-free diets can support healthy early growth, while cautioning that diet type was self-reported by parents, which may affect the accuracy of the results.

  10. YouTube Music limits lyrics viewing for free users

    Google's YouTube Music service is restricting lyrics viewing for free users. Free users report a cap on how many times they can view lyrics, along with warnings showing how many views they have left. Google has not formally announced that lyrics will become a paid-subscriber-only feature; a spokesperson said the company is still testing and has made no final decision. A combined YouTube video and music subscription costs $14 a month, while YouTube Music alone costs $11 a month. Streaming giant Spotify likewise restricted users' access to lyrics in 2024, then dropped the restriction after strong user backlash.

  11. 996 work schedules take hold at US AI startups

    The 996 schedule, 9am to 9pm six days a week, is widely criticized in China, yet US AI startups are now treating it as a selling point. New York AI startup Rilla warns applicants in its job ads that work weeks can run as long as 70 hours. Browser-Use, a seven-person startup building tools for browser-AI interaction, uses a single shared space as both office and living quarters, further blurring the line between work and life. Deedy Das, a partner at the VC firm Menlo Ventures, points out that long hours do not translate into higher efficiency or productivity, and that the practice alienates employees with families as well as experienced older workers; long hours also lead to burnout. Founders working long hours is another matter, he argues, since their own interests are at stake: if the company succeeds they become very wealthy. Research from Michigan State University found that an employee working 70 hours a week produces almost the same output as one working 50.

  12. Google plans 100-year bonds to raise money for AI

    Major US tech companies plan to spend $700 billion on AI data centers this year, and to raise the money they are turning to the bond market. Google has reached agreements with several banks to issue rare bonds with a 100-year maturity. Historically, many century bonds have ended in failure because the issuer went bankrupt before the term was up. Bonds issued by tech giants mostly run 40 years at most; the last tech giant to issue a century bond was Motorola in 1997, which was also the last time Motorola was regarded as an industry giant. That year Motorola's corporate brand ranked first in the US, ahead of Microsoft; today its market capitalization ranks 232nd.