WEEK · 2025-W49

Weekly Digest — 2025-W49

131 unique stories (2025-12-01 – 2025-12-07), aggregated across 8 sources.

Hacker News (42)

  1. How to Attend Meetings – Internal guidelines from the New York Times (docs.google.com)
  2. Ghostty compiled to WASM with xterm.js API compatibility (github.com)
  3. High-income job losses are cooling housing demand (jbrec.com)
  4. India orders smartphone makers to preload state-owned cyber safety app (www.reuters.com)
  5. DeepSeek-v3.2: Pushing the frontier of open large language models [pdf] (huggingface.co)
  6. Google unkills JPEG XL? (tonisagrista.com)
  7. Claude 4.5 Opus’ Soul Document (www.lesswrong.com)
  8. The Junior Hiring Crisis (people-work.io)
  9. Anthropic acquires Bun (bun.com)
  10. 100k TPS over a billion rows: the unreasonable effectiveness of SQLite (andersmurphy.com)
  11. Peter Thiel's Apocalyptic Worldview Is a Dangerous Fantasy (jacobin.com)
  12. I designed and printed a custom nose guard to help my dog with DLE (snoutcover.com)

GitHub Trending (24)

  1. sansan0 / TrendRadar

    🎯 Say goodbye to information overload: AI helps you make sense of trending news, with lightweight public-opinion monitoring and analysis. Multi-platform trend aggregation plus MCP-based AI analysis tooling. Monitors 35 platforms (Douyin, Zhihu, Bilibili, Wallstreetcn, Cailian Press, and more), with smart filtering, automatic push notifications, and conversational AI analysis (mine the news in natural language: trend tracking, sentiment analysis, similarity search, and more across 13 tools). Pushes to WeCom / personal WeChat / Feishu / DingTalk / Telegram / email / ntfy / bark / Slack. 30-second web deployment, phone notifications within 1 minute, no programming required; Docker deployment supported. ⭐ Make the algorithm work for you, and use AI to understand what's trending.

  2. google / adk-go

    An open-source, code-first Go toolkit for building, evaluating, and deploying sophisticated AI agents with flexibility and control.

  3. TapXWorld / ChinaTextbook

    PDF textbooks for all primary, middle, and high school and university levels.

  4. yeongpin / cursor-free-vip

    [Support 0.49.x] (Reset Cursor AI MachineID & Bypass Higher Token Limit) Automatically resets the Cursor machine ID and unlocks Pro features for free when Cursor reports: "You've reached your trial request limit." / "Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake."

  5. nvm-sh / nvm

    Node Version Manager - POSIX-compliant bash script to manage multiple active node.js versions

  6. traefik / traefik

    The Cloud Native Application Proxy

  7. basecamp / fizzy

    Kanban as it should be. Not as it has been.

  8. oven-sh / bun

    Incredibly fast JavaScript runtime, bundler, test runner, and package manager – all in one

  9. DayuanJiang / next-ai-draw-io

    A next.js web application that integrates AI capabilities with draw.io diagrams. This app allows you to create, modify, and enhance diagrams through natural language commands and AI-assisted visualization.

  10. openai / codex

    Lightweight coding agent that runs in your terminal

  11. LadybirdBrowser / ladybird

    Truly independent web browser

  12. ashishpatel26 / 500-AI-Agents-Projects

    The 500 AI Agents Projects is a curated collection of AI agent use cases across various industries. It showcases practical applications and provides links to open-source projects for implementation, illustrating how AI agents are transforming sectors such as healthcare, finance, education, retail, and more.

Hugging Face (31)

  1. Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer

    The landscape of high-performance image generation models is currently dominated by proprietary systems, such as Nano Banana Pro and Seedream 4.0. Leading open-source alternatives, including Qwen-Image, Hunyuan-Image-3.0, and FLUX.2, are characterized by massive parameter counts (20B to 80B), making them impractical for inference and fine-tuning on consumer-grade hardware. To address this gap, we propose Z-Image, an efficient 6B-parameter foundation generative model built upon a Scalable Single-Stream Diffusion Transformer (S3-DiT) architecture that challenges the "scale-at-all-costs" paradigm. By systematically optimizing the entire model lifecycle, from a curated data infrastructure to a streamlined training curriculum, we complete the full training workflow in just 314K H800 GPU hours (approx. $630K). Our few-step distillation scheme with reward post-training further yields Z-Image-Turbo, offering both sub-second inference latency on an enterprise-grade H800 GPU and compatibility with consumer-grade hardware (<16GB VRAM). Additionally, our omni-pre-training paradigm also enables efficient training of Z-Image-Edit, an editing model with impressive instruction-following capabilities. Both qualitative and quantitative experiments demonstrate that our model achieves performance comparable to or surpassing that of leading competitors across various dimensions. Most notably, Z-Image exhibits exceptional capabilities in photorealistic image generation and bilingual text rendering, delivering results that rival top-tier commercial models, thereby demonstrating that state-of-the-art results are achievable with significantly reduced computational overhead. We publicly release our code, weights, and online demo to foster the development of accessible, budget-friendly, yet state-of-the-art generative models.

  2. REASONEDIT: Towards Reasoning-Enhanced Image Editing Models

    Recent advances in image editing models have shown remarkable progress. A common architectural design couples a multimodal large language model (MLLM) encoder with a diffusion decoder, as seen in systems such as Step1X-Edit and Qwen-Image-Edit, where the MLLM encodes both the reference image and the instruction but remains frozen during training. In this work, we demonstrate that unlocking the reasoning capabilities of MLLM can further push the boundaries of editing models. Specifically, we explore two reasoning mechanisms, thinking and reflection, which enhance instruction understanding and editing accuracy. Based on that, our proposed framework enables image editing in a thinking-editing-reflection loop: the thinking mechanism leverages the world knowledge of MLLM to interpret abstract instructions, while the reflection reviews editing results, automatically corrects unintended manipulations, and identifies the stopping round. Extensive experiments demonstrate that our reasoning approach achieves significant performance gains, with improvements of ImgEdit (+4.3%), GEdit (+4.7%), and Kris (+8.2%) when initializing our DiT from the Step1X-Edit (ReasonEdit-S), and also outperforms previous open-source methods on both GEdit and Kris when integrated with Qwen-Image-Edit (ReasonEdit-Q).

  3. AnyTalker: Scaling Multi-Person Talking Video Generation with Interactivity Refinement

    Recently, multi-person video generation has started to gain prominence. While a few preliminary works have explored audio-driven multi-person talking video generation, they often face challenges due to the high costs of diverse multi-person data collection and the difficulty of driving multiple identities with coherent interactivity. To address these challenges, we propose AnyTalker, a multi-person generation framework that features an extensible multi-stream processing architecture. Specifically, we extend Diffusion Transformer's attention block with a novel identity-aware attention mechanism that iteratively processes identity-audio pairs, allowing arbitrary scaling of drivable identities. Besides, training multi-person generative models demands massive multi-person data. Our proposed training pipeline depends solely on single-person videos to learn multi-person speaking patterns and refines interactivity with only a few real multi-person clips. Furthermore, we contribute a targeted metric and dataset designed to evaluate the naturalness and interactivity of the generated multi-person videos. Extensive experiments demonstrate that AnyTalker achieves remarkable lip synchronization, visual quality, and natural interactivity, striking a favorable balance between data costs and identity scalability.

  4. Vision Bridge Transformer at Scale

    We introduce Vision Bridge Transformer (ViBT), a large-scale instantiation of Brownian Bridge Models designed for conditional generation. Unlike traditional diffusion models that transform noise into data, Bridge Models directly model the trajectory between inputs and outputs, creating an efficient data-to-data translation paradigm. By scaling these models to 20B and 1.3B parameters, we demonstrate their effectiveness for image and video translation tasks. To support this scale, we adopt a Transformer architecture and propose a variance-stabilized velocity-matching objective for robust training. Together, these advances highlight the power of scaling Bridge Models for instruction-based image editing and complex video translation.

  5. Architecture Decoupling Is Not All You Need For Unified Multimodal Model

    Unified multimodal models for image generation and understanding represent a significant step toward AGI and have attracted widespread attention from researchers. The main challenge of this task lies in the difficulty in establishing an optimal training paradigm due to inherent conflicting targets in understanding and generation tasks. To alleviate these conflicts and pursue higher performance, many researchers adopt varying degrees of model decoupling (e.g., Double image encoders, MOE/MOT architecture, or frozen MLLM). However, excessive model decoupling can lead to the loss of interleave generation ability, undermining the original intent of unified models. In this work, we aim to explore how to mitigate task conflicts without resorting to model decoupling. Firstly, we analyze why decoupling alleviates conflicts by studying the cross-modal attention behavior of models. We observe that model decoupling essentially drives models toward task-specific multimodal interaction patterns, as seen in Qwen-VL and HunyuanImage, and that the more thorough the decoupling, the more consistent the behavior becomes. Motivated by this observation, we propose Attention Interaction Alignment (AIA) loss, which explicitly learns Task-Specific multimodal interaction patterns during training. To demonstrate the generalizability of our AIA loss, we apply it to Emu3 and Janus-Pro during SFT and post-training stage respectively. Without bells and whistles, AIA not only refines cross-modal attention patterns, but also boosts both generation and understanding performance.

  6. DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning

    Large language models have made significant progress in mathematical reasoning, which serves as an important testbed for AI and could impact scientific research if further advanced. By scaling reasoning with reinforcement learning that rewards correct final answers, LLMs have improved from poor performance to saturating quantitative reasoning competitions like AIME and HMMT in one year. However, this approach faces fundamental limitations. Pursuing higher final answer accuracy doesn't address a key issue: correct answers don't guarantee correct reasoning. Moreover, many mathematical tasks like theorem proving require rigorous step-by-step derivation rather than numerical answers, making final answer rewards inapplicable. To push the limits of deep reasoning, we believe it is necessary to verify the comprehensiveness and rigor of mathematical reasoning. Self-verification is particularly important for scaling test-time compute, especially for open problems without known solutions. Towards self-verifiable mathematical reasoning, we investigate how to train an accurate and faithful LLM-based verifier for theorem proving. We then train a proof generator using the verifier as the reward model, and incentivize the generator to identify and resolve as many issues as possible in their own proofs before finalizing them. To maintain the generation-verification gap as the generator becomes stronger, we propose to scale verification compute to automatically label new hard-to-verify proofs, creating training data to further improve the verifier. Our resulting model, DeepSeekMath-V2, demonstrates strong theorem-proving capabilities, achieving gold-level scores on IMO 2025 and CMO 2024 and a near-perfect 118/120 on Putnam 2024 with scaled test-time compute.

  7. From Code Foundation Models to Agents and Applications: A Practical Guide to Code Intelligence

    Large language models (LLMs) have fundamentally transformed automated software development by enabling direct translation of natural language descriptions into functional code, driving commercial adoption through tools like GitHub Copilot (Microsoft), Cursor (Anysphere), Trae (ByteDance), and Claude Code (Anthropic). The field has evolved dramatically from rule-based systems to Transformer-based architectures, with success rates on benchmarks like HumanEval improving from single digits to over 95%. In this work, we provide a comprehensive synthesis and practical guide (a series of analytic and probing experiments) to code LLMs, systematically examining the complete model life cycle from data curation to post-training through advanced prompting paradigms, code pre-training, supervised fine-tuning, reinforcement learning, and autonomous coding agents. We analyze the code capability of general LLMs (GPT-4, Claude, LLaMA) and code-specialized LLMs (StarCoder, Code LLaMA, DeepSeek-Coder, and QwenCoder), critically examining the techniques, design decisions, and trade-offs. Further, we articulate the research-practice gap between academic research (e.g., benchmarks and tasks) and real-world deployment (e.g., software-related code tasks), including code correctness, security, contextual awareness of large codebases, and integration with development workflows, and map promising research directions to practical needs. Finally, we conduct a series of experiments to provide a comprehensive analysis of code pre-training, supervised fine-tuning, and reinforcement learning, covering scaling laws, framework selection, hyperparameter sensitivity, model architectures, and dataset comparisons.

  8. LongVT: Incentivizing "Thinking with Long Videos" via Native Tool Calling

    Large multimodal models (LMMs) have shown great potential for video reasoning with textual Chain-of-Thought. However, they remain vulnerable to hallucinations, especially when processing long-form videos where evidence is sparse and temporally dispersed. Inspired by how humans comprehend long videos - by first skimming globally and then examining relevant clips for details - we introduce LongVT, an end-to-end agentic framework that enables "Thinking with Long Videos" via interleaved Multimodal Chain-of-Tool-Thought. Specifically, we exploit LMMs' inherent temporal grounding ability as a native video cropping tool to zoom in on a specific video clip and resample finer-grained video frames. This global-to-local reasoning loop continues until answers are grounded in retrieved visual evidence. Given the scarcity of fine-grained question-answering (QA) data for the long video reasoning task, we curate and will release a data suite named VideoSIAH to facilitate both training and evaluation. Specifically, our training dataset consists of 247.9K samples for tool-integrated cold-start supervised fine-tuning, 1.6K samples for agentic reinforcement learning, and 15.4K samples for agentic reinforcement fine-tuning, respectively. Our evaluation benchmark consists of 1,280 QA pairs that are carefully curated through a semi-automatic data pipeline with human-in-the-loop validation. With a meticulously designed three-stage training strategy and extensive empirical validation, LongVT consistently outperforms existing strong baselines across four challenging long-video understanding and reasoning benchmarks. Our codes, data, and model checkpoints are publicly available at https://github.com/EvolvingLMMs-Lab/LongVT .

  9. Envision: Benchmarking Unified Understanding & Generation for Causal World Process Insights

    Current multimodal models aim to transcend the limitations of single-modality representations by unifying understanding and generation, often using text-to-image (T2I) tasks to calibrate semantic consistency. However, their reliance on static, single-image generation in training and evaluation leads to overfitting to static pattern matching and semantic fusion, while fundamentally hindering their ability to model dynamic processes that unfold over time. To address these constraints, we propose Envision, a causal event progression benchmark for chained text-to-multi-image generation. Grounded in world knowledge and structured by spatiotemporal causality, it reorganizes existing evaluation dimensions and includes 1,000 four-stage prompts spanning six scientific and humanities domains. To transition evaluation from single images to sequential frames and assess whether models truly internalize world knowledge while adhering to causal-temporal constraints, we introduce Envision-Score, a holistic metric integrating multi-dimensional consistency, physicality, and aesthetics. Comprehensive evaluation of 15 models (10 specialized T2I models, 5 unified models) uncovers that specialized T2I models demonstrate proficiency in aesthetic rendering yet lack intrinsic world knowledge. Unified multimodal models bridge this gap, consistently outperforming specialized counterparts in causal narrative coherence. However, even these unified architectures remain subordinate to closed-source models and struggle to overcome the core challenge of spatiotemporal consistency. This demonstrates that a focus on causally isolated single images impedes multi-frame reasoning and generation, promoting static pattern matching over dynamic world modeling, ultimately limiting world-knowledge internalization and generation.

  10. Stabilizing Reinforcement Learning with LLMs: Formulation and Practices

    This paper proposes a novel formulation for reinforcement learning (RL) with large language models, explaining why and under what conditions the true sequence-level reward can be optimized via a surrogate token-level objective in policy gradient methods such as REINFORCE. Specifically, through a first-order approximation, we show that this surrogate becomes increasingly valid only when both the training-inference discrepancy and policy staleness are minimized. This insight provides a principled explanation for the crucial role of several widely adopted techniques in stabilizing RL training, including importance sampling correction, clipping, and particularly Routing Replay for Mixture-of-Experts (MoE) models. Through extensive experiments with a 30B MoE model totaling hundreds of thousands of GPU hours, we show that for on-policy training, the basic policy gradient algorithm with importance sampling correction achieves the highest training stability. When off-policy updates are introduced to accelerate convergence, combining clipping and Routing Replay becomes essential to mitigate the instability caused by policy staleness. Notably, once training is stabilized, prolonged optimization consistently yields comparable final performance regardless of cold-start initialization. We hope that the shared insights and the developed recipes for stable RL training will facilitate future research.
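
    The token-level surrogate with importance-sampling correction and clipping that the abstract describes can be sketched roughly as follows. This is an illustrative PPO-style fragment under assumed names (`logp_new`, `logp_old`, `advantages` are hypothetical inputs), not the paper's actual code:

```python
import math

def token_level_pg_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Token-level surrogate policy-gradient loss with per-token
    importance-sampling correction and PPO-style clipping.
    Illustrative sketch only; the paper's formulation may differ."""
    losses = []
    for ln, lo, adv in zip(logp_new, logp_old, advantages):
        # Importance ratio between current and behavior (stale) policy.
        ratio = math.exp(ln - lo)
        # Clipping limits the influence of tokens where the policies diverge.
        clipped = max(min(ratio, 1 + clip_eps), 1 - clip_eps)
        # Pessimistic (min) surrogate, negated for gradient descent.
        losses.append(-min(ratio * adv, clipped * adv))
    return sum(losses) / len(losses)
```

When training is fully on-policy (`logp_new == logp_old`), every ratio is 1 and the loss reduces to the plain REINFORCE surrogate; the correction and clipping only matter as policy staleness grows.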

  11. How Far Are We from Genuinely Useful Deep Research Agents?

    Deep Research Agents (DRAs) aim to automatically produce analyst-level reports through iterative information retrieval and synthesis. However, most existing DRAs were validated on question-answering benchmarks, while research on generating comprehensive reports remains overlooked. Worse, current benchmarks for report synthesis suffer from task complexity and subjective metrics -- this fails to reflect user demands and limits the practical utility of generated reports. To address these gaps, we present Fine-grained DEepResearch bench (FINDER), an enhanced benchmark consisting of 100 human-curated research tasks with 419 structured checklist items that standardize report structure, analytical depth, and factual grounding. Based on approximately 1,000 reports produced by mainstream DRAs, we further propose Deep rEsearch Failure Taxonomy (DEFT), the first failure taxonomy for deep research agents. DEFT contains 14 fine-grained failure modes across reasoning, retrieval, and generation, and is built upon grounded theory with human-LLM co-annotating and inter-annotator reliability validation. Our experimental findings reveal that current DRAs struggle not with task comprehension but with evidence integration, verification, and reasoning-resilient planning.

  12. What about gravity in video generation? Post-Training Newton's Laws with Verifiable Rewards

    Recent video diffusion models can synthesize visually compelling clips, yet often violate basic physical laws-objects float, accelerations drift, and collisions behave inconsistently-revealing a persistent gap between visual realism and physical realism. We propose NewtonRewards, the first physics-grounded post-training framework for video generation based on verifiable rewards. Instead of relying on human or VLM feedback, NewtonRewards extracts measurable proxies from generated videos using frozen utility models: optical flow serves as a proxy for velocity, while high-level appearance features serve as a proxy for mass. These proxies enable explicit enforcement of Newtonian structure through two complementary rewards: a Newtonian kinematic constraint enforcing constant-acceleration dynamics, and a mass conservation reward preventing trivial, degenerate solutions. We evaluate NewtonRewards on five Newtonian Motion Primitives (free fall, horizontal/parabolic throw, and ramp sliding down/up) using our newly constructed large-scale benchmark, NewtonBench-60K. Across all primitives in visual and physics metrics, NewtonRewards consistently improves physical plausibility, motion smoothness, and temporal coherence over prior post-training methods. It further maintains strong performance under out-of-distribution shifts in height, speed, and friction. Our results show that physics-grounded verifiable rewards offer a scalable path toward physics-aware video generation.
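
    The Newtonian kinematic constraint described above, rewarding velocity sequences consistent with constant acceleration, can be illustrated with a minimal sketch. The function name and the variance-based penalty are assumptions for illustration; the paper's actual reward is computed from optical-flow proxies and may be shaped differently:

```python
def constant_accel_reward(velocities, dt=1.0):
    """Reward is high when finite-difference accelerations are nearly
    constant across frames (illustrative sketch, not the paper's code)."""
    # Finite-difference acceleration between consecutive frames.
    accels = [(v2 - v1) / dt for v1, v2 in zip(velocities, velocities[1:])]
    mean_a = sum(accels) / len(accels)
    # Variance of acceleration measures deviation from constant-acceleration motion.
    var = sum((a - mean_a) ** 2 for a in accels) / len(accels)
    return 1.0 / (1.0 + var)
```

A free-fall-like sequence (velocity growing linearly per frame) gets the maximum reward of 1.0, while jittery or drifting motion is penalized.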

Solidot (34)

  1. Raspberry Pi raises prices as memory costs surge

    Raspberry Pi announced price increases on some Raspberry Pi 4 and 5 models due to the recent surge in memory prices, and also introduced a 1GB version of the Raspberry Pi 5 priced at $45. Raspberry Pi 4 and 5 prices rose by between $5 and $25: the 4GB Raspberry Pi 4 went from $55 to $60, and the 16GB Raspberry Pi 5 from $120 to $145. Raspberry Pi said the recent memory price surge is driven by the AI boom and that it will lower prices once the situation eases.

  2. The vampire squid sheds light on the origins of octopuses

    The vampire squid is a deep-sea cephalopod of the superorder Octopodiformes; its ancestors moved into the deep sea during the Jurassic to escape predation by plesiosaurs, and its form has remained unchanged for over a hundred million years, earning it the label of living fossil. A Japanese research team unexpectedly caught a vampire squid in Suruga Bay and sequenced it, finding a genome of more than 11 billion base pairs, over twice the size of the largest known octopus genome. Despite its name, the vampire squid is neither an octopus nor a squid, let alone a vampire; it is the last and only survivor of an ancient lineage whose other members have all vanished. Its history dates back 183 million years, and it retains many ancestral traits while having evolved to live in the dark deep sea on a diet of carrion. Its genome is far larger than those of squids and octopuses, with 62% consisting of repetitive sequences. It belongs to Octopodiformes but retains part of the chromosome structure of Decapodiformes. The researchers say the vampire squid offers a direct window into the earliest stages of cephalopod evolution.

  3. Over 30 million user accounts leaked at South Korean e-commerce giant

    South Korean e-commerce giant Coupang suffered a breach exposing information on more than 30 million user accounts. The leaked personal data includes names, email addresses, phone numbers, and home addresses, and even some order records. Under South Korea's Personal Information Protection Act, a company that violates the law can be fined up to 3% of its revenue. Coupang's cumulative revenue for the first three quarters of this year was 36.3 trillion won; excluding business units with little connection to the breach, the figure is 31 trillion won, which annualized could mean a fine of up to 1.2 trillion won. According to the report Coupang filed with police, the leak was not caused by a hack but by a Chinese national employee who exfiltrated the data; the employee had already left the company and the country.

  4. Official SmartTube APK files found laced with malware

    The SmartTube developer announced last week that his signing key had been compromised, released new versions of the app signed with a new key, and urged users to switch. SmartTube is a popular alternative to the YouTube app on Android TV and Fire TV devices. The developer disclosed that the computer he used to build the official APK files had been compromised, causing some APK releases to ship with malware; it is not yet clear which APK version was the first to contain it. SmartTube v30.43 and v30.47 on APKMirror have both been flagged as infected. The developer said all old SmartTube releases have been removed from the project's GitHub repository, the infected build machine has been dealt with, and the old signing key has been retired. SmartTube v30.56 is the first release built with the new key on a clean machine.

  5. Japanese news organizations demand Perplexity stop using their articles

    Japan's Kyodo News, the Mainichi Shimbun, and the Sankei Shimbun sent a letter of protest on Monday to AI search company Perplexity, demanding that it immediately stop using their news articles, on the grounds that the unauthorized use infringes their copyrights. The Yomiuri Shimbun, the Asahi Shimbun, and Nikkei had previously made similar demands or filed lawsuits. Kyodo said in the letter that over roughly one year starting in August 2024, Perplexity accessed "47NEWS", a news site carrying articles from Kyodo and its member newspapers, hundreds of thousands of times in total. The letter stresses that Perplexity collected and copied news content without permission and used it to generate answers, infringing copyright. It also notes that Perplexity's answers cited Kyodo articles as sources yet contained false information that differed from the articles themselves, damaging the credibility and brand value of Kyodo's news products.

  6. Launch accident leaves Russia without its only crewed launch pad

    On November 27 Russia successfully launched the Soyuz MS-28 crewed spacecraft from the Baikonur Cosmodrome. However, the mobile service structure beneath pad 31/6 was severely damaged when the rocket's exhaust toppled it from height, and the pad cannot be used until it is repaired; experts estimate the repairs could take anywhere from a few months to three years. Baikonur is currently Russia's only launch site capable of sending Soyuz crewed spacecraft and Progress uncrewed cargo ships to the International Space Station, and the uncrewed Progress MS-33 had been scheduled to launch on December 21. Russia has other launch sites, but they are either at unsuitable latitudes (such as Plesetsk), not certified for crewed flight (such as Vostochny), or retired and handed over to museums (such as Baikonur's Gagarin's Start).

  7. Oxford's 2025 Word of the Year is "rage bait"

    Oxford University Press's 2025 Word of the Year is "rage bait". Together with last year's word, "brain rot", it reminds us that in the algorithmic age, emotion has become the most manipulated resource. Rage bait refers to online content deliberately crafted to make people angry, frustrated, or offended in order to drive clicks or social engagement: posts designed to provoke you into leaving an angry reaction, firing off a comment, and sharing, so the algorithm spreads the outrage further. According to Oxford corpus data, usage of "rage bait" has tripled over the past 12 months, making it a term frequently cited by media and social platforms.

  8. Huawei has filed more GPU patents than Nvidia in recent years

    Huawei's GPU-related patent filings are rising. In the five years through 2023 its filings grew tenfold, surpassing those of America's Nvidia and Intel, reflecting Huawei's heavy investment in AI-related technology. Among patents containing the keyword "GPU", filings by Samsung Electronics and Huawei have surged in recent years. Huawei filed 3,091 in 2023, roughly ten times its 2018 figure, equivalent to about three times Intel's filings and five times Nvidia's for that year.

  9. Singapore bans secondary school students from using smartphones at school

    Singapore's Ministry of Education announced that from January next year, secondary school students may not use smartphones or smartwatches while at school, including during class, recess, and after-school co-curricular activities, enrichment classes, or remedial lessons. Students must keep these devices in lockers, schoolbags, or other designated storage while at school; schools may permit smartphone use when necessary. The ministry said the move is intended to encourage students to build good digital habits, interact meaningfully with classmates after class, and cultivate a healthy lifestyle. Singapore already prohibits primary school students from using smartphones or smartwatches at school; they must keep the devices in their schoolbags or designated storage, including during recess and after-school study programs.

  10. Linux share among Steam users reaches 3.20%

    The popularity of the Steam Deck handheld and the success of SteamOS, an Arch Linux-based distribution, have pushed the Linux share of Steam users to 3.20%. According to Valve's November 2025 Steam Hardware & Software Survey, Linux accounts for 3.20% (+0.15%) of the operating systems players run, Windows for 94.79% (-0.05%), and OSX for 2.02% (-0.09%). Following the end of Windows 10 support, Windows 11's share reached 65.59% (+2.02%), while Windows 10 fell below 30% to 29.06% (-2.08%). On the Windows platform, Intel CPUs account for 57.30% (-0.52%) and AMD for 42.61% (+0.52%). By user language, Simplified Chinese accounts for 24.93% (+0.92%) and English for 37.37% (-0.59%).

  11. Let’s Encrypt to shorten certificate lifetimes to 45 days by 2028

    Let’s Encrypt announced that it will shorten certificate lifetimes from the current 90 days to 45 days by 2028. The move complies with a resolution passed earlier this year by the Certification Authority Browser Forum (CA/Browser Forum), which requires the maximum TLS certificate lifetime to drop to 200 days by March 15, 2026, to 100 days by March 15, 2027, and to 47 days by March 15, 2029. Let’s Encrypt will also shorten the window during which a certificate may be issued after domain control validation, from the current 30 days to 7 hours. To reduce the impact on users, the changes will roll out in stages: a 45-day lifetime becomes an opt-in configuration on May 13, 2026; the default lifetime drops to 64 days on February 10, 2027; and it drops further to 45 days on February 16, 2028.
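
    With lifetimes this short, automated monitoring of certificate expiry becomes more important. A minimal standard-library sketch (function names are illustrative, and `cert_days_remaining` requires network access):

```python
import datetime
import socket
import ssl

def days_left(not_after: str, now: datetime.datetime) -> int:
    """Days from `now` until a certificate's notAfter timestamp,
    as formatted by ssl.getpeercert(), e.g. 'Jan  1 00:00:00 2027 GMT'."""
    expires = datetime.datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires - now).days

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Fetch the live certificate for host:port and report days to expiry."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()["notAfter"]
    return days_left(not_after, datetime.datetime.utcnow())
```

Under a 45-day lifetime, a renewal alert at 15 days remaining would leave a third of the certificate's validity as buffer, versus the comfortable 30-day margins common today.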

  12. India orders smartphones to preload government cybersecurity app

    India's telecom ministry has ordered smartphone makers to preinstall Sanchar Saathi, a government cybersecurity app that users cannot disable. The order was issued to manufacturers on November 28 with a 90-day compliance deadline; it was not published publicly but delivered privately to smartphone makers. Apple, Samsung, vivo, OPPO, and Xiaomi are all bound by it. The ministry also requires manufacturers to push the app to existing handsets via software updates. India is one of the world's largest phone markets, with more than 1.2 billion users. Sanchar Saathi launched in January, and the government says the app has helped recover more than 700,000 lost phones. The move may anger Apple and privacy advocates, as Apple typically refuses such requests. Counterpoint Research research director Tarun Pathak said Apple may seek a compromise: rather than mandatory preinstallation, negotiating for an option that prompts users to install the app.