WEEK · 2025-W47

Weekly Digest — 2025-W47

119 unique stories (2025-11-17 to 2025-11-23), aggregated across 8 sources.

Hacker News (42)

  1. A graph explorer of the Epstein emails (epstein-doc-explorer-1.onrender.com)
  2. My stages of learning to be a socially normal person (sashachapin.substack.com)
  3. An official atlas of North Korea (www.cartographerstale.com)
  4. Azure hit by 15 Tbps DDoS attack using 500k IP addresses (www.bleepingcomputer.com)
  5. Cities panic over having to release mass surveillance recordings (neuburger.substack.com)
  6. Israeli-founded app preloaded on Samsung phones is attracting controversy (www.sammobile.com)
  7. Rebecca Heineman – from homelessness to porting Doom (corecursive.com)
  8. Blender 5.0 (www.blender.org)
  9. GitHub: Git operation failures (www.githubstatus.com)
  10. I am stepping down as the CEO of Mastodon (blog.joinmastodon.org)
  11. Pebble, Rebble, and a path forward (ericmigi.com)
  12. Google Antigravity (antigravity.google)

GitHub Trending (6)

  1. sansan0 / TrendRadar

    🎯 Say goodbye to information overload: AI helps you make sense of trending news, with simple public-opinion monitoring and analysis. Multi-platform hot-topic aggregation plus an MCP-based AI analysis tool. Monitors 35 platforms (Douyin, Zhihu, Bilibili, Wallstreetcn, Cailian Press, and others), with smart filtering, automatic push notifications, and conversational AI analysis (dig into the news in natural language: trend tracking, sentiment analysis, similarity search, and more, 13 tools in total). Supports push via WeCom/Feishu/DingTalk/Telegram/email/ntfy; 30-second web deployment, phone notifications within 1 minute, no programming required. Docker deployment supported. ⭐ Make the algorithm work for you and understand trending topics with AI.

  2. google / adk-go

    An open-source, code-first Go toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.

  3. TapXWorld / ChinaTextbook

    PDF textbooks covering all of China's primary-school, secondary-school, and university curricula.

  4. yeongpin / cursor-free-vip

    [Supports 0.49.x] (Reset Cursor AI MachineID & Bypass Higher Token Limit) Automatically resets the machine ID and unlocks Pro features for free, bypassing messages such as: "You've reached your trial request limit." / "Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake."

  5. nvm-sh / nvm

    Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions

  6. traefik / traefik

    The Cloud Native Application Proxy

Hugging Face (31)

  1. DoPE: Denoising Rotary Position Embedding

    Rotary Position Embedding (RoPE) in Transformer models has inherent limits that weaken length extrapolation. We reinterpret the attention map with positional encoding as a noisy feature map, and propose Denoising Positional Encoding (DoPE), a training-free method based on truncated matrix entropy to detect outlier frequency bands in the feature map. Leveraging the noise characteristics of the feature map, we further reparameterize it with a parameter-free Gaussian distribution to achieve robust extrapolation. Our method theoretically reveals the underlying cause of the attention sink phenomenon and its connection to truncated matrix entropy. Experiments on needle-in-a-haystack and many-shot in-context learning tasks demonstrate that DoPE significantly improves retrieval accuracy and reasoning stability across extended contexts (up to 64K tokens). The results show that the denoising strategy for positional embeddings effectively mitigates attention sinks and restores balanced attention patterns, providing a simple yet powerful solution for improving length generalization. Project page: https://The-physical-picture-of-LLMs.github.io
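The entropy signal the abstract leans on can be illustrated with a small sketch. This is not the paper's implementation; `matrix_entropy` is a hypothetical helper that treats the normalized (optionally truncated) singular values of a feature map as a probability distribution, so a collapsed, sink-like map scores near zero while a well-spread map scores high:

```python
import numpy as np

def matrix_entropy(A, k=None):
    """Shannon entropy of the (optionally truncated) singular-value
    distribution of A: a rough proxy for how concentrated the matrix's
    energy is. Low entropy means a few dominant directions."""
    s = np.linalg.svd(A, compute_uv=False)
    if k is not None:
        s = s[:k]          # truncate to the top-k singular values
    p = s / s.sum()        # normalize into a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
sink = np.outer(rng.random(64), rng.random(64))   # rank-1, "attention sink"-like
spread = rng.random((64, 64))                     # full-rank, well-spread
assert matrix_entropy(sink) < matrix_entropy(spread)
```

In DoPE the analogous quantity is computed per frequency band of the positional encoding to flag outlier bands, a step this sketch does not attempt.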

  2. WEAVE: Unleashing and Benchmarking the In-context Interleaved Comprehension and Generation

    Recent advances in unified multimodal models (UMMs) have enabled impressive progress in visual comprehension and generation. However, existing datasets and benchmarks focus primarily on single-turn interactions, failing to capture the multi-turn, context-dependent nature of real-world image creation and editing. To address this gap, we present WEAVE, the first suite for in-context interleaved cross-modality comprehension and generation. Our suite consists of two complementary parts. WEAVE-100k is a large-scale dataset of 100K interleaved samples spanning over 370K dialogue turns and 500K images, covering comprehension, editing, and generation tasks that require reasoning over historical context. WEAVEBench is a human-annotated benchmark with 100 tasks based on 480 images, featuring a hybrid VLM-judge evaluation framework that scores outputs against both a reference image and the original image combined with the editing instructions, assessing models' abilities in multi-turn generation, visual memory, and world-knowledge reasoning across diverse domains. Experiments demonstrate that training on WEAVE-100k strengthens vision comprehension, image editing, and comprehension-generation collaboration capabilities. It also helps UMMs develop emergent visual-memory capabilities, while extensive evaluations on WEAVEBench expose the persistent limitations and challenges of current approaches in multi-turn, context-aware image generation and editing. We believe WEAVE provides a foundation for studying in-context interleaved comprehension and generation in the multimodal community.

  3. GGBench: A Geometric Generative Reasoning Benchmark for Unified Multimodal Models

    The advent of Unified Multimodal Models (UMMs) signals a paradigm shift in artificial intelligence, moving from passive perception to active, cross-modal generation. Despite their unprecedented ability to synthesize information, a critical gap persists in evaluation: existing benchmarks primarily assess discriminative understanding or unconstrained image generation separately, failing to measure the integrated cognitive process of generative reasoning. To bridge this gap, we propose that geometric construction provides an ideal testbed as it inherently demands a fusion of language comprehension and precise visual generation. We introduce GGBench, a benchmark designed specifically to evaluate geometric generative reasoning. It provides a comprehensive framework for systematically diagnosing a model's ability to not only understand and reason but to actively construct a solution, thereby setting a more rigorous standard for the next generation of intelligent systems. Project website: https://opendatalab-raiser.github.io/GGBench/.

  4. UI2Code^N: A Visual Language Model for Test-Time Scalable Interactive UI-to-Code Generation

    User interface (UI) programming is a core yet highly complex part of modern software development. Recent advances in visual language models (VLMs) highlight the potential of automatic UI coding, but current approaches face two key limitations: multimodal coding capabilities remain underdeveloped, and single-turn paradigms make little use of iterative visual feedback. We address these challenges with an interactive UI-to-code paradigm that better reflects real-world workflows and raises the upper bound of achievable performance. Under this paradigm, we present UI2Code^N, a visual language model trained through staged pretraining, fine-tuning, and reinforcement learning to achieve foundational improvements in multimodal coding. The model unifies three key capabilities: UI-to-code generation, UI editing, and UI polishing. We further explore test-time scaling for interactive generation, enabling systematic use of multi-turn feedback. Experiments on UI-to-code and UI polishing benchmarks show that UI2Code^N establishes a new state of the art among open-source models and achieves performance comparable to leading closed-source models such as Claude-4-Sonnet and GPT-5. Our code and models are available at https://github.com/zai-org/UI2Code_N.

  5. AIonopedia: an LLM agent orchestrating multimodal learning for ionic liquid discovery

    The discovery of novel Ionic Liquids (ILs) is hindered by critical challenges in property prediction, including limited data, poor model accuracy, and fragmented workflows. Leveraging the power of Large Language Models (LLMs), we introduce AIonopedia, to the best of our knowledge, the first LLM agent for IL discovery. Powered by an LLM-augmented multimodal domain foundation model for ILs, AIonopedia enables accurate property predictions and incorporates a hierarchical search architecture for molecular screening and design. Trained and evaluated on a newly curated and comprehensive IL dataset, our model delivers superior performance. Complementing these results, evaluations on literature-reported systems indicate that the agent can perform effective IL modification. Moving beyond offline tests, the practical efficacy was further confirmed through real-world wet-lab validation, in which the agent demonstrated exceptional generalization capabilities on challenging out-of-distribution tasks, underscoring its ability to accelerate real-world IL discovery.

  6. Virtual Width Networks

    We introduce Virtual Width Networks (VWN), a framework that delivers the benefits of wider representations without incurring the quadratic cost of increasing the hidden size. VWN decouples representational width from backbone width, expanding the embedding space while keeping backbone compute nearly constant. In our large-scale experiment, an 8-times expansion accelerates optimization by over 2 times for next-token and 3 times for next-2-token prediction. The advantage amplifies over training as both the loss gap grows and the convergence-speedup ratio increases, showing that VWN is not only token-efficient but also increasingly effective with scale. Moreover, we identify an approximately log-linear scaling relation between virtual width and loss reduction, offering an initial empirical basis and motivation for exploring virtual-width scaling as a new dimension of large-model efficiency.
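A minimal numpy sketch can make the width decoupling concrete. The sizes and the tanh stand-in for the backbone below are illustrative assumptions, not VWN's published architecture; the point is that the wide embedding only touches the backbone through two thin projections, so per-layer compute scales with the backbone width rather than the virtual width:

```python
import numpy as np

rng = np.random.default_rng(0)
d_backbone, d_virtual, seq = 256, 2048, 8   # hypothetical 8x virtual expansion

# The wide embedding table lives in the virtual space...
embed = rng.normal(size=(1000, d_virtual)) * 0.02
# ...but thin projections map into and out of the narrow backbone.
proj_in = rng.normal(size=(d_virtual, d_backbone)) * 0.02
proj_out = rng.normal(size=(d_backbone, d_virtual)) * 0.02

tokens = rng.integers(0, 1000, size=seq)
x = embed[tokens]   # (seq, d_virtual)  wide representation
h = x @ proj_in     # (seq, d_backbone) narrow backbone stream
h = np.tanh(h)      # stand-in for the transformer backbone layers
y = h @ proj_out    # (seq, d_virtual)  back to virtual width

assert y.shape == (seq, d_virtual)
```

The backbone matmuls cost O(seq · d_backbone²) regardless of d_virtual, which is the "nearly constant backbone compute" the abstract claims.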

  7. P1: Mastering Physics Olympiads with Reinforcement Learning

    Recent progress in large language models (LLMs) has moved the frontier from puzzle-solving to science-grade reasoning: the kind needed to tackle problems whose answers must stand against nature, not merely fit a rubric. Physics is the sharpest test of this shift, because it binds symbols to reality in a fundamental way and serves as the cornerstone of most modern technologies. In this work, we advance physics reasoning by developing large language models with exceptional capabilities, excelling in particular at solving Olympiad-level physics problems. We introduce P1, a family of open-source physics reasoning models trained entirely through reinforcement learning (RL). Among them, P1-235B-A22B is the first open-source model with gold-medal performance at the latest International Physics Olympiad (IPhO 2025), and it wins 12 gold medals out of 13 international/regional physics competitions in 2024/2025. P1-30B-A3B also surpasses almost all other open-source models on IPhO 2025, earning a silver medal. Further equipped with the agentic framework PhysicsMinions, P1-235B-A22B+PhysicsMinions achieves the overall No. 1 result on IPhO 2025 and obtains the highest average score across the 13 physics competitions. Beyond physics, P1 models also deliver strong performance on other reasoning tasks such as math and coding, showing the broad generalizability of the P1 series.

  8. Uni-MoE-2.0-Omni: Scaling Language-Centric Omnimodal Large Model with Advanced MoE, Training and Data

    We present Uni-MoE 2.0 from the Lychee family. As a fully open-source omnimodal large model (OLM), it substantially advances Lychee's Uni-MoE series in language-centric multimodal understanding, reasoning, and generation. Based on the Qwen2.5-7B dense architecture, we build Uni-MoE-2.0-Omni from scratch through three core contributions: a dynamic-capacity Mixture-of-Experts (MoE) design, a progressive training strategy enhanced with an iterative reinforcement strategy, and a carefully curated multimodal data-matching technique. The model is capable of omnimodal understanding as well as generating images, text, and speech. Architecturally, our new MoE framework balances computational efficiency and capability for 10 cross-modal inputs using shared, routed, and null experts, while our Omni-Modality 3D RoPE ensures spatio-temporal cross-modality alignment in the self-attention layer. For training, following cross-modal pretraining, we use a progressive supervised fine-tuning strategy that activates modality-specific experts and is enhanced by balanced data composition and an iterative GSPO-DPO method to stabilise RL training and improve reasoning. Data-wise, the base model, trained on approximately 75B tokens of open-source multimodal data, is equipped with special speech- and image-generation tokens, allowing it to learn these generative tasks by conditioning its outputs on linguistic cues. Extensive evaluation across 85 benchmarks demonstrates that our model achieves SOTA or highly competitive performance against leading OLMs, surpassing Qwen2.5-Omni (trained with 1.2T tokens) on over 50 of 76 benchmarks. Key strengths include video understanding (+7% avg. across 8 benchmarks), omnimodal understanding (+7% avg. across 4), and audiovisual reasoning (+4%). It also advances long-form speech processing (reducing WER by 4.2%) and leads in low-level image processing and controllable generation across 5 metrics.
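The shared/routed/null expert composition can be sketched in a few lines. This is a toy single-token illustration under assumed shapes, not Uni-MoE's code: a shared expert always fires, routed experts are mixed by a softmax gate, and one extra gate slot belongs to a null expert whose output is zero, so mass routed there simply skips computation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_routed = 16, 3

def expert(seed):
    """A tiny dense 'expert': one tanh layer with its own weights."""
    W = np.random.default_rng(seed).normal(size=(d, d)) * 0.1
    return lambda x: np.tanh(x @ W)

shared = expert(1)                        # always-on shared expert
routed = [expert(s) for s in (2, 3, 4)]   # routed experts
W_gate = rng.normal(size=(d, n_routed + 1)) * 0.1  # +1 logit for the null expert

def moe_layer(x):
    logits = x @ W_gate
    gates = np.exp(logits - logits.max())
    gates /= gates.sum()
    # The last gate is the null expert: it contributes nothing, so its
    # probability mass effectively lets the token bypass routed compute.
    return shared(x) + sum(g * e(x) for g, e in zip(gates[:n_routed], routed))

tok = rng.normal(size=d)
assert moe_layer(tok).shape == (d,)
```

A real implementation would route top-k per token across a batch and balance expert load; both are omitted here.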

  9. MiroThinker: Pushing the Performance Boundaries of Open-Source Research Agents via Model, Context, and Interactive Scaling

    We present MiroThinker v1.0, an open-source research agent designed to advance tool-augmented reasoning and information-seeking capabilities. Unlike previous agents that only scale up model size or context length, MiroThinker explores interaction scaling at the model level, systematically training the model to handle deeper and more frequent agent-environment interactions as a third dimension of performance improvement. Unlike LLM test-time scaling, which operates in isolation and risks degradation with longer reasoning chains, interactive scaling leverages environment feedback and external information acquisition to correct errors and refine trajectories. Through reinforcement learning, the model achieves efficient interaction scaling: with a 256K context window, it can perform up to 600 tool calls per task, enabling sustained multi-turn reasoning and complex real-world research workflows. Across four representative benchmarks (GAIA, HLE, BrowseComp, and BrowseComp-ZH), the 72B variant achieves up to 81.9%, 37.7%, 47.1%, and 55.6% accuracy respectively, surpassing previous open-source agents and approaching commercial counterparts such as GPT-5-high. Our analysis reveals that MiroThinker benefits from interactive scaling consistently: research performance improves predictably as the model engages in deeper and more frequent agent-environment interactions, demonstrating that interaction depth exhibits scaling behaviors analogous to model size and context length. These findings establish interaction scaling as a third critical dimension for building next-generation open research agents, complementing model capacity and context windows.
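The interaction-scaling loop can be caricatured in a few lines. Everything here (the `llm` decision function, the tool registry, the call budget) is a hypothetical interface for illustration, not MiroThinker's API; the point is that the budget caps agent-environment interactions, which is the quantity being scaled:

```python
def run_agent(task, llm, tools, max_calls=600):
    """Toy tool-use loop: act until the policy answers or the
    tool-call budget (the 'interaction scale') is exhausted."""
    history = [("task", task)]
    for _ in range(max_calls):
        action, arg = llm(history)    # decide the next step from full context
        if action == "answer":
            return arg
        history.append((action, tools[action](arg)))  # feed observation back
    return None  # budget exhausted without an answer

# A stub policy: search twice, then answer.
def stub_llm(history):
    return ("search", "query") if len(history) < 3 else ("answer", "done")

result = run_agent("demo task", stub_llm, {"search": lambda q: "some result"})
assert result == "done"
```

Raising `max_calls` (together with a context window large enough to hold the growing history) is the knob the paper argues scales analogously to model size.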

  10. Souper-Model: How Simple Arithmetic Unlocks State-of-the-Art LLM Performance

    Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse domains, but their training remains resource- and time-intensive, requiring massive compute power and careful orchestration of training procedures. Model souping, the practice of averaging weights from multiple models of the same architecture, has emerged as a promising pre- and post-training technique that can enhance performance without expensive retraining. In this paper, we introduce Soup Of Category Experts (SoCE), a principled approach to model souping that uses benchmark composition to identify optimal model candidates and applies non-uniform weighted averaging to maximize performance. In contrast to previous uniform-averaging approaches, our method leverages the observation that benchmark categories often exhibit low inter-correlations in model performance. SoCE identifies "expert" models for each weakly correlated category cluster and combines them using optimized weighted averaging rather than uniform weights. We demonstrate that the proposed method improves performance and robustness across multiple domains, including multilingual capabilities, tool calling, and math, and achieves state-of-the-art results on the Berkeley Function Calling Leaderboard.
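The souping arithmetic itself is just a weighted average over checkpoints. A minimal sketch, assuming each checkpoint is a dict of numpy parameter arrays; SoCE's actual candidate selection and weight optimization over benchmark clusters are not reproduced here, and the mixing weights below are illustrative numbers:

```python
import numpy as np

def soup(models, weights):
    """Weighted average of same-architecture checkpoints,
    each given as a dict of parameter arrays."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize the mixing weights
    return {k: sum(wi * m[k] for wi, m in zip(w, models))
            for k in models[0].keys()}

# Two toy "expert" checkpoints for weakly correlated benchmark clusters.
m_math  = {"w": np.array([1.0, 0.0])}
m_tools = {"w": np.array([0.0, 1.0])}
souped = soup([m_math, m_tools], weights=[0.7, 0.3])
assert np.allclose(souped["w"], [0.7, 0.3])
```

Uniform souping is the special case of equal weights; SoCE's claim is that choosing non-uniform weights per category cluster beats that baseline.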

  11. Part-X-MLLM: Part-aware 3D Multimodal Large Language Model

    We introduce Part-X-MLLM, a native 3D multimodal large language model that unifies diverse 3D tasks by formulating them as programs in a structured, executable grammar. Given an RGB point cloud and a natural language prompt, our model autoregressively generates a single, coherent token sequence encoding part-level bounding boxes, semantic descriptions, and edit commands. This structured output serves as a versatile interface to drive downstream geometry-aware modules for part-based generation and editing. By decoupling the symbolic planning from the geometric synthesis, our approach allows any compatible geometry engine to be controlled through a single, language-native frontend. We pre-train a dual-encoder architecture to disentangle structure from semantics and instruction-tune the model on a large-scale, part-centric dataset. Experiments demonstrate that our model excels at producing high-quality, structured plans, enabling state-of-the-art performance in grounded Q&A, compositional generation, and localized editing through one unified interface. Project page: https://chunshi.wang/Part-X-MLLM/

  12. MMaDA-Parallel: Multimodal Large Diffusion Language Models for Thinking-Aware Editing and Generation

    While thinking-aware generation aims to improve performance on complex tasks, we identify a critical failure mode where existing sequential, autoregressive approaches can paradoxically degrade performance due to error propagation. To systematically analyze this issue, we propose ParaBench, a new benchmark designed to evaluate both text and image output modalities. Our analysis using ParaBench reveals that this performance degradation is strongly correlated with poor alignment between the generated reasoning and the final image. To resolve this, we propose a parallel multimodal diffusion framework, MMaDA-Parallel, that enables continuous, bidirectional interaction between text and images throughout the entire denoising trajectory. MMaDA-Parallel is trained with supervised finetuning and then further optimized by Parallel Reinforcement Learning (ParaRL), a novel strategy that applies semantic rewards along the trajectory to enforce cross-modal consistency. Experiments validate that our model significantly improves cross-modal alignment and semantic consistency, achieving a 6.9% improvement in Output Alignment on ParaBench compared to the state-of-the-art model, Bagel, establishing a more robust paradigm for thinking-aware image synthesis. Our code is open-sourced at https://github.com/tyfeld/MMaDA-Parallel

Solidot (40)

  1. AMD's Share of the x86 CPU Market Tops 25%

    According to Mercury Research, AMD's share of the x86 CPU market surpassed one quarter in Q3 2025. Total x86 CPU shipments were flat quarter-over-quarter, mainly because Intel's shipments were weak. AMD's share of combined x86 client and server CPU shipments crossed 25% to reach 25.6%, up 1.4 percentage points from 24.2% the previous quarter, leaving Intel with 74.4%; AMD's share of desktop x86 CPU shipments exceeded 33%. Counting embedded systems, IoT, and game-console SoCs, AMD's share reached 30.9%, versus just 25% in Q3 last year.

  2. Bitcoin Drops More Than $30,000 in a Month

    Bitcoin has erased a year's worth of gains in a single month. On October 6 it hit a record high of $126,251, but four days later Trump's tariff remarks triggered a global market rout, and last Sunday it fell to $93,714, below its level at the end of last year, wiping out all of the past year's gains in a decline of more than $32,000. Matthew Hougan, chief investment officer at crypto asset manager Bitwise Asset Management, said large sellers quietly exited over the past month, leaving the market without the inflows that had been driving prices up. The selloff resulted from a combination of long-term holders taking profits, institutional outflows, macroeconomic uncertainty, and the liquidation of leveraged long positions.

  3. Copy-and-Paste Is Now the Leading Source of Corporate Data Leaks

    According to LayerX's Browser Security Report 2025, the most common source of corporate data leaks is now copy-and-paste, driven by the popularity of generative AI (GenAI): 77% of employees paste data into AI prompt boxes, and 32% of copy-paste operations from corporate accounts to non-corporate accounts happen in GenAI tools. LayerX CEO Or Eshed said data-loss protections were traditionally built around email, file sharing, and sanctioned SaaS services, and did not anticipate pasting into browser prompt boxes becoming the primary leak channel. The data show that GenAI accounts for 11% of enterprise application usage, 45% of employees use AI tools regularly, 67% of AI tool access goes through personal accounts, and ChatGPT makes up 92% of all usage.

  4. Solar and Wind Can Cover 2025's Growth in Electricity Demand

    Data from the energy think tank Ember show that new solar and wind generation was sufficient to meet the growth in electricity demand over the first three quarters of this year. Ember forecasts that fossil-fuel generation will be flat for the full year, the first year of zero growth since the pandemic began. Solar generation rose 498 TWh (+31%) year over year, already exceeding solar's growth for the whole of 2024; wind generation rose 137 TWh (+7.6%). Together they added 635 TWh of electricity, more than the 603 TWh (+2.7%) growth in global demand. Solar and wind supplied 17.6% of global electricity in the first three quarters, up from 15.2% a year earlier. Renewables overall (solar, wind, hydro, biomass, and geothermal) reached 43% of global electricity supply, while fossil fuels slipped from 58.7% to 57.1%. The shift was driven partly by China and India: China's fossil-fuel generation fell 52 TWh (-1.1%) and India's fell 34 TWh (-3.3%).

  5. US Bitcoin Mines Pivot to AI Data Centers

    Bitcoin miner Bitfarms has announced plans to transition from cryptocurrency mining to AI data center services by 2027. While not the largest US bitcoin miner, Bitfarms operates at considerable scale, with 12 dedicated bitcoin-mining data centers and 341 MW of power capacity, enough to deploy thousands of Nvidia GB300 NVL72 server racks. CEO Ben Gagnon believes AI data centers can generate more revenue than bitcoin mining. Bitfarms posted a Q3 net loss of $46 million, up nearly 91% year over year. Even though bitcoin set a record high in early October, its volatility means the company cannot reliably cover operating costs with it. The pivot is seen as a major gamble, since the AI industry is widely believed to be in a bubble.

  6. The US-China AI Cold War

    The WSJ reports that fear is driving the US-China AI cold war. The US currently leads in AI, with the most advanced and capable models and the most advanced AI chips; private investors poured $104 billion into AI startups in the first half of this year alone. But China has more AI engineers, lower costs, a faster pace of development, and more abundant energy. It is using state direction to accelerate construction of compute clusters in cheap-power regions such as Inner Mongolia, and plans to link hundreds of data centers into a shared compute pool, a "national cloud", by 2028. China is also pouring hundreds of billions of dollars into its power grid to support AI training and adoption. According to Chatbot Arena, Chinese AI models rank near the top in tasks from coding to video generation. Former OpenAI board member Helen Toner notes that Americans do not actually know whether scaling compute with ever more advanced chips will keep producing more powerful AI models; if performance plateaus, China will have a chance to compete even as companies like OpenAI spend heavily.

  7. Harvard Holds $442 Million in Cryptocurrency

    According to filings with the SEC, Harvard University holds more than $442 million worth of cryptocurrency, in the form of BlackRock's iShares Bitcoin Trust (IBIT) ETF. Brown University also disclosed roughly $14 million in crypto ETF holdings. Harvard's bitcoin ETF position is larger than any single stock it holds, including stakes in mainstream companies such as Nvidia, Microsoft, and Amazon. The $442 million amounts to less than 1% of Harvard's nearly $57 billion endowment. Despite bitcoin's recent slide, IBIT's total market value exceeds $70 billion. Harvard's investment is another sign that cryptocurrency is gaining institutional acceptance.

  8. Debian Libre Live Images Project Publishes Its First Release

    Debian's official live images have included non-free firmware since 2022; the Debian Libre Live Images project aims to let users run and install Debian without installing any non-free software. The project currently provides live ISO images for 64-bit x86 CPUs (amd64). The developers note that, as a first public release, the images may have problems, and recommend consulting the known-issues list before use.

  9. Cloudflare Outage Ripples Across the Entire Internet

    After Amazon AWS and Microsoft Azure, the internet has once again experienced what a single point of failure means for its infrastructure: a Cloudflare outage affected the entire internet. Cloudflare provides a range of services, including DDoS protection, a web application firewall, a public DNS resolver, reverse proxying, and a CDN; its services are as widely used as AWS's and Azure's, so its failure rippled across the whole internet. Cloudflare's status page acknowledged the problem, stating "we are continuing to investigate the issue". Experts say such outages highlight the fragility of the modern internet, where problems at the handful of companies underpinning it can cause severe disruption.

  10. Raccoons Show Early Signs of Domestication

    According to a study published in Frontiers in Zoology, city-dwelling raccoons are showing early signs of domestication. Urban food scraps offer animals an inexhaustible buffet, but cities differ greatly from their wild habitats, so adapting to urban life puts them under strong selection pressure. Domestication is not just humans capturing wild animals and breeding them selectively; wild animals adapting to human environments is a form of domestication too. Domesticated animals show marked biological differences from their wild counterparts, such as shorter faces, smaller heads, floppy ears, and white fur patches, a set of traits known as domestication syndrome. Biologist Raffaela Lesch and colleagues analyzed nearly 20,000 raccoon photos on iNaturalist and found that urban raccoons' snouts are 3.5% shorter than those of their rural counterparts. The researchers next plan to capture urban raccoons to see whether they are friendlier than rural ones.

  11. Quitting Is Often the Smartest Choice

    You have probably been told since childhood to "hang in there", as if letting go while dangling by a thread means death. According to a meta-analysis published in Nature Human Behaviour, quitting is often actually the wisest choice. The researchers analyzed 235 studies on how people adjust their goals after running into obstacles to success. Study author Hugh Riddell said persisting with unattainable goals takes a serious toll, raising stress, lowering well-being, and even causing health problems, while disengaging from old goals and reengaging with new ones restores a sense of purpose and well-being. The study also found that goal disengagement was associated with significant reductions in stress, anxiety, and depression.

  12. Sundar Pichai Says No Company Would Be Immune If the AI Bubble Bursts

    In an interview, Alphabet CEO Sundar Pichai said no company would be immune if the AI bubble bursts, acknowledging that the current AI boom contains elements of irrationality. Asked whether Google could escape the fallout of a burst bubble, Pichai said it could withstand one but could not be immune. Alphabet's share price has doubled in seven months, lifting its market value to $3.5 trillion. Pichai said Google's unique "full-stack" model, from chips to YouTube data to models and frontier science, means it is better positioned to handle any turbulence in the AI market. He called AI "the most profound technology" humankind has yet created: "we will have to work through societal disruptions", and it will also "create new opportunities".