OrangeBot.AI Digest — 2026-01-28

56 headlines across 8 sources, aggregated for this day.

Hacker News (15)

  1. Somebody used spoofed ADSB signals to raster the meme of JD Vance (alecmuffett.com)
  2. Apple to Soon Take Up to 30% Cut from All Patreon Creators in iOS App (www.macrumors.com)
  3. That's not how email works (danq.me)
  4. Mousefood – Build embedded terminal UIs for microcontrollers (github.com)
  5. Oban, the job processing framework from Elixir, has come to Python (www.dimamik.com)
  6. Amazon cuts 16k jobs (www.reuters.com)
  7. Microsoft forced me to switch to Linux (www.himthe.dev)
  8. Airfoil (2024) (ciechanow.ski)
  9. ICE and Palantir: US agents using health data to hunt illegal immigrants (www.bmj.com)
  10. Show HN: The HN Arcade (andrewgy8.github.io)
  11. Pandas 3.0 (pandas.pydata.org)
  12. Rust at Scale: An Added Layer of Security for WhatsApp (engineering.fb.com)
  13. ASML staffing changes could result in a net reduction of around 1700 positions (www.asml.com)
  14. Make.ts (matklad.github.io)
  15. SVG Path Editor (yqnn.github.io)

GitHub Trending (11)

  1. badlogic / pi-mono

    AI agent toolkit: coding agent CLI, unified LLM API, TUI & web UI libraries, Slack bot, vLLM pods

  2. hashicorp / vault

    A tool for secrets management, encryption as a service, and privileged access management

  3. asgeirtj / system_prompts_leaks

    Collection of extracted System Prompts from popular chatbots like ChatGPT, Claude & Gemini

  4. NevaMind-AI / memU

    Memory for 24/7 proactive agents like moltbot (clawdbot).

  5. MoonshotAI / kimi-cli

    Kimi Code CLI is your next CLI agent.

  6. kubernetes / ingress-nginx

    Ingress NGINX Controller for Kubernetes

  7. protocolbuffers / protobuf

    Protocol Buffers - Google's data interchange format

  8. lobehub / lobehub

    The ultimate space for work and life — to find, build, and collaborate with agent teammates that grow with you. We are taking agent harness to the next level — enabling multi-agent collaboration, effortless agent team design, and introducing agents as the unit of work interaction.

  9. ran-j / PS2Recomp

    Playstation 2 Static Recompiler & Runtime Tool to make native PC ports

  10. bambulab / BambuStudio

    PC Software for BambuLab and other 3D printers

  11. GetStream / Vision-Agents

    Open Vision Agents by Stream. Build Vision Agents quickly with any model or video provider. Uses Stream's edge network for ultra-low latency.

Hugging Face (15)

  1. AgentDoG: A Diagnostic Guardrail Framework for AI Agent Safety and Security

    The rise of AI agents introduces complex safety and security challenges arising from autonomous tool use and environmental interactions. Current guardrail models lack agentic risk awareness and transparency in risk diagnosis. To introduce an agentic guardrail that covers complex and numerous risky behaviors, we first propose a unified three-dimensional taxonomy that orthogonally categorizes agentic risks by their source (where), failure mode (how), and consequence (what). Guided by this structured and hierarchical taxonomy, we introduce a new fine-grained agentic safety benchmark (ATBench) and a Diagnostic Guardrail framework for agent safety and security (AgentDoG). AgentDoG provides fine-grained and contextual monitoring across agent trajectories. More crucially, AgentDoG can diagnose the root causes of unsafe actions and seemingly safe but unreasonable actions, offering provenance and transparency beyond binary labels to facilitate effective agent alignment. AgentDoG variants are available in three sizes (4B, 7B, and 8B parameters) across Qwen and Llama model families. Extensive experimental results demonstrate that AgentDoG achieves state-of-the-art performance in agentic safety moderation in diverse and complex interactive scenarios. All models and datasets are openly released.

  2. AdaReasoner: Dynamic Tool Orchestration for Iterative Visual Reasoning

    When humans face problems beyond their immediate capabilities, they rely on tools, providing a promising paradigm for improving visual reasoning in multimodal large language models (MLLMs). Effective reasoning, therefore, hinges on knowing which tools to use, when to invoke them, and how to compose them over multiple steps, even when faced with new tools or new tasks. We introduce AdaReasoner, a family of multimodal models that learn tool use as a general reasoning skill rather than as tool-specific or explicitly supervised behavior. AdaReasoner is enabled by (i) a scalable data curation pipeline exposing models to long-horizon, multi-step tool interactions; (ii) Tool-GRPO, a reinforcement learning algorithm that optimizes tool selection and sequencing based on end-task success; and (iii) an adaptive learning mechanism that dynamically regulates tool usage. Together, these components allow models to infer tool utility from task context and intermediate outcomes, enabling coordination of multiple tools and generalization to unseen tools. Empirically, AdaReasoner exhibits strong tool-adaptive and generalization behaviors: it autonomously adopts beneficial tools, suppresses irrelevant ones, and adjusts tool usage frequency based on task demands, despite never being explicitly trained to do so. These capabilities translate into state-of-the-art performance across challenging benchmarks, improving the 7B base model by +24.9% on average and surpassing strong proprietary systems such as GPT-5 on multiple tasks, including VSP and Jigsaw.

  3. A Pragmatic VLA Foundation Model

    Offering great potential in robotic manipulation, a capable Vision-Language-Action (VLA) foundation model is expected to faithfully generalize across tasks and platforms while ensuring cost efficiency (e.g., data and GPU hours required for adaptation). To this end, we develop LingBot-VLA with around 20,000 hours of real-world data from 9 popular dual-arm robot configurations. Through a systematic assessment on 3 robotic platforms, each completing 100 tasks with 130 post-training episodes per task, our model achieves clear superiority over competitors, showcasing its strong performance and broad generalizability. We have also built an efficient codebase, which delivers a throughput of 261 samples per second per GPU with an 8-GPU training setup, representing a 1.5–2.8× speedup (depending on the underlying VLM base model) over existing VLA-oriented codebases. The above features ensure that our model is well-suited for real-world deployment. To advance the field of robot learning, we provide open access to the code, base model, and benchmark data, with a focus on enabling more challenging tasks and promoting sound evaluation standards.

  4. AVMeme Exam: A Multimodal Multilingual Multicultural Benchmark for LLMs' Contextual and Cultural Knowledge and Thinking

    Internet audio-visual clips convey meaning through time-varying sound and motion, which extend beyond what text alone can represent. To examine whether AI models can understand such signals in human cultural contexts, we introduce AVMeme Exam, a human-curated benchmark of over one thousand iconic Internet sounds and videos spanning speech, songs, music, and sound effects. Each meme is paired with a unique Q&A assessing levels of understanding from surface content to context and emotion to usage and world knowledge, along with metadata such as original year, transcript, summary, and sensitivity. We systematically evaluate state-of-the-art multimodal large language models (MLLMs) alongside human participants using this benchmark. Our results reveal a consistent limitation: current models perform poorly on textless music and sound effects, and struggle to think in context and in culture compared to surface content. These findings highlight a key gap in human-aligned multimodal intelligence and call for models that can perceive contextually and culturally beyond the surface of what they hear and see. Project page: avmemeexam.github.io/public

  5. Visual Generation Unlocks Human-Like Reasoning through Multimodal World Models

    Humans construct internal world models and reason by manipulating the concepts within these models. Recent advances in AI, particularly chain-of-thought (CoT) reasoning, approximate such human cognitive abilities, where world models are believed to be embedded within large language models. Expert-level performance in formal and abstract domains such as mathematics and programming has been achieved in current systems by relying predominantly on verbal reasoning. However, they still lag far behind humans in domains like physical and spatial intelligence, which require richer representations and prior knowledge. The emergence of unified multimodal models (UMMs) capable of both verbal and visual generation has therefore sparked interest in more human-like reasoning grounded in complementary multimodal pathways, though their benefits remain unclear. From a world-model perspective, this paper presents the first principled study of when and how visual generation benefits reasoning. Our key position is the visual superiority hypothesis: for certain tasks--particularly those grounded in the physical world--visual generation more naturally serves as world models, whereas purely verbal world models encounter bottlenecks arising from representational limitations or insufficient prior knowledge. Theoretically, we formalize internal world modeling as a core component of CoT reasoning and analyze distinctions among different forms of world models. Empirically, we identify tasks that necessitate interleaved visual-verbal CoT reasoning, constructing a new evaluation suite, VisWorld-Eval. Controlled experiments on a state-of-the-art UMM show that interleaved CoT significantly outperforms purely verbal CoT on tasks that favor visual world modeling, but offers no clear advantage otherwise. Together, this work clarifies the potential of multimodal world modeling for more powerful, human-like multimodal AI.

  6. World Craft: Agentic Framework to Create Visualizable Worlds via Text

    Large Language Models (LLMs) motivate generative agent simulation (e.g., AI Town) to create a "dynamic world", holding immense value across entertainment and research. However, for non-experts, especially those without programming skills, it isn't easy to customize a visualizable environment by themselves. In this paper, we introduce World Craft, an agentic world creation framework to create an executable and visualizable AI Town via user textual descriptions. It consists of two main modules, World Scaffold and World Guild. World Scaffold is a structured and concise standardization to develop interactive game scenes, serving as an efficient scaffolding for LLMs to customize an executable AI Town-like environment. World Guild is a multi-agent framework that progressively analyzes users' intents from rough descriptions and synthesizes required structured content (e.g., environment layout and assets) for World Scaffold. Moreover, we construct a high-quality error-correction dataset via reverse engineering to enhance spatial knowledge and improve the stability and controllability of layout generation, while reporting multi-dimensional evaluation metrics for further analysis. Extensive experiments demonstrate that our framework significantly outperforms existing commercial code agents (Cursor and Antigravity) and LLMs (Qwen3 and Gemini-3-Pro) in scene construction and narrative intent conveyance, providing a scalable solution for the democratization of environment creation.

  7. FABLE: Forest-Based Adaptive Bi-Path LLM-Enhanced Retrieval for Multi-Document Reasoning

    The rapid expansion of long-context Large Language Models (LLMs) has reignited debate on whether Retrieval-Augmented Generation (RAG) remains necessary. However, empirical evidence reveals persistent limitations of long-context inference, including the lost-in-the-middle phenomenon, high computational cost, and poor scalability for multi-document reasoning. Conversely, traditional RAG systems, while efficient, are constrained by flat chunk-level retrieval that introduces semantic noise and fails to support structured cross-document synthesis. We present FABLE, a Forest-based Adaptive Bi-path LLM-Enhanced retrieval framework that integrates LLMs into both knowledge organization and retrieval. FABLE constructs LLM-enhanced hierarchical forest indexes with multi-granularity semantic structures, then employs a bi-path strategy combining LLM-guided hierarchical traversal with structure-aware propagation for fine-grained evidence acquisition, with explicit budget control for adaptive efficiency trade-offs. Extensive experiments demonstrate that FABLE consistently outperforms SOTA RAG methods and achieves comparable accuracy to full-context LLM inference with up to 94% token reduction, showing that long-context LLMs amplify rather than fully replace the need for structured retrieval.

  8. TriPlay-RL: Tri-Role Self-Play Reinforcement Learning for LLM Safety Alignment

    In recent years, safety risks associated with large language models have become increasingly prominent, highlighting the urgent need to mitigate the generation of toxic and harmful content. The mainstream paradigm for LLM safety alignment typically adopts a collaborative framework involving three roles: an attacker for adversarial prompt generation, a defender for safety defense, and an evaluator for response assessment. In this paper, we propose a closed-loop reinforcement learning framework called TriPlay-RL that enables iterative and co-improving collaboration among three roles with near-zero manual annotation. Experimental results show that the attacker preserves high output diversity while achieving a 20%-50% improvement in adversarial effectiveness; the defender attains 10%-30% gains in safety performance without degrading general reasoning capability; and the evaluator continuously refines its fine-grained judgment ability through iterations, accurately distinguishing unsafe responses, simple refusals, and useful guidance. Overall, our framework establishes an efficient and scalable paradigm for LLM safety alignment, enabling continuous co-evolution within a unified learning loop.

  9. Post-LayerNorm Is Back: Stable, Expressive, and Deep

    Large language model (LLM) scaling is hitting a wall. Widening models yields diminishing returns, and extending context length does not improve fundamental expressivity. In contrast, depth scaling offers theoretically superior expressivity, yet current Transformer architectures struggle to train reliably at extreme depths. We revisit the Post-LayerNorm (Post-LN) formulation, whose instability at scale caused its replacement by Pre-LN in modern LLMs. We show that the central failure mode of Post-LN arises from the ResNet-style residual pathway, which introduces gradient vanishing in deep networks. We present Keel, a Post-LN Transformer that replaces this residual path with a Highway-style connection. This modification preserves the gradient flow through the residual branch, preventing signal vanishing from the top layers to the bottom. Unlike prior methods, Keel enables stable training at extreme depths without requiring specialized initialization or complex optimization tricks. Keel trains robustly at depths exceeding 1000 layers and consistently improves perplexity and depth-scaling characteristics over Pre-LN. These findings indicate that Post-LN, when paired with a Highway-style connection, provides a simple and effective foundation for building deeply scalable LLMs, opening the possibility for future infinite-depth architectures.
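
    The contrast the abstract draws between the two residual styles can be sketched in a few lines of plain Python; the fixed scalar gate and elementwise stand-in sublayer below are illustrative assumptions, not Keel's actual formulation:

```python
import math
from statistics import fmean

def layer_norm(xs, eps=1e-5):
    # normalize to zero mean, unit variance (learned scale/shift omitted for brevity)
    mu = fmean(xs)
    var = fmean([(x - mu) ** 2 for x in xs])
    sd = math.sqrt(var + eps)
    return [(x - mu) / sd for x in xs]

def post_ln_resnet_block(xs, f):
    # classic Post-LN: LayerNorm applied *after* the ResNet-style addition,
    # so the identity path is renormalized at every layer
    return layer_norm([x + f(x) for x in xs])

def post_ln_highway_block(xs, f, gate=0.5):
    # Highway-style: a gate blends the transform with the identity path,
    # keeping an explicitly weighted identity route for the signal
    return layer_norm([gate * f(x) + (1.0 - gate) * x for x in xs])
```

The repeated renormalization of the identity path in the ResNet form is the gradient-vanishing mechanism the abstract attributes to deep Post-LN stacks; the highway form keeps a weighted identity route open through every layer.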

  10. Selective Steering: Norm-Preserving Control Through Discriminative Layer Selection

    Despite significant progress in alignment, large language models (LLMs) remain vulnerable to adversarial attacks that elicit harmful behaviors. Activation steering techniques offer a promising inference-time intervention approach, but existing methods suffer from critical limitations: activation addition requires careful coefficient tuning and is sensitive to layer-specific norm variations, while directional ablation provides only binary control. Recent work on Angular Steering introduces continuous control via rotation in a 2D subspace, but its practical implementation violates norm preservation, causing distribution shift and generation collapse, particularly in models below 7B parameters. We propose Selective Steering, which addresses these limitations through two key innovations: (1) a mathematically rigorous norm-preserving rotation formulation that maintains activation distribution integrity, and (2) discriminative layer selection that applies steering only where feature representations exhibit opposite-signed class alignment. Experiments across nine models demonstrate that Selective Steering achieves 5.5x higher attack success rates than prior methods while maintaining zero perplexity violations and approximately 100% capability retention on standard benchmarks. Our approach provides a principled, efficient framework for controllable and stable LLM behavior modification. Code: https://github.com/knoveleng/steering
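
    The norm-preserving rotation at the heart of the method can be illustrated with a minimal sketch, assuming the steering plane (u, v) is already orthonormalized; the paper's discriminative layer selection is not shown:

```python
import math

def rotate_in_plane(h, u, v, theta):
    """Rotate activation h by angle theta inside the plane spanned by the
    orthonormal vectors u and v, leaving the orthogonal complement untouched.
    Because a rotation is an isometry, the activation's norm is preserved."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    a, b = dot(h, u), dot(h, v)                  # coordinates inside the 2D subspace
    a2 = a * math.cos(theta) - b * math.sin(theta)
    b2 = a * math.sin(theta) + b * math.cos(theta)
    # replace the in-plane component; everything orthogonal to u and v is kept
    return [hi + (a2 - a) * ui + (b2 - b) * vi
            for hi, ui, vi in zip(h, u, v)]
```

Only the component of h lying in span(u, v) is rotated, which is what keeps the operation norm-preserving rather than additive.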

  11. Towards Pixel-Level VLM Perception via Simple Points Prediction

    We present SimpleSeg, a strikingly simple yet highly effective approach to endow Multimodal Large Language Models (MLLMs) with native pixel-level perception. Our method reframes segmentation as a simple sequence generation problem: the model directly predicts sequences of points (textual coordinates) delineating object boundaries, entirely within its language space. To achieve high fidelity, we introduce a two-stage SFT-to-RL training pipeline, where Reinforcement Learning with an IoU-based reward refines the point sequences to accurately match ground-truth contours. We find that the standard MLLM architecture possesses a strong, inherent capacity for low-level perception that can be unlocked without any specialized architecture. On segmentation benchmarks, SimpleSeg achieves performance that is comparable to, and often surpasses, methods relying on complex, task-specific designs. This work shows that precise spatial understanding can emerge from simple point prediction, challenging the prevailing need for auxiliary components and paving the way for more unified and capable VLMs. Homepage: https://simpleseg.github.io/
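
    An IoU-based reward over predicted point sequences, of the kind the abstract describes, can be sketched by rasterizing the predicted and ground-truth contours on a coarse grid; this from-scratch rasterizer is an illustration, not the paper's implementation:

```python
def point_in_polygon(x, y, poly):
    # standard ray-casting test: count polygon edges crossed by a ray
    # going right from (x, y); an odd count means the point is inside
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        (xi, yi), (xj, yj) = poly[i], poly[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def mask_iou(pred, gt, size=32):
    # rasterize both point sequences onto a size x size grid of cell centers,
    # then compute intersection-over-union of the resulting masks
    inter = union = 0
    for gy in range(size):
        for gx in range(size):
            cx, cy = gx + 0.5, gy + 0.5
            a = point_in_polygon(cx, cy, pred)
            b = point_in_polygon(cx, cy, gt)
            inter += a and b
            union += a or b
    return inter / union if union else 0.0
```

A reward like this is differentiable-free, which is why the abstract pairs it with reinforcement learning rather than supervised regression on coordinates.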

  12. Revisiting Parameter Server in LLM Post-Training

    Modern data parallel (DP) training favors collective communication over parameter servers (PS) for its simplicity and efficiency under balanced workloads. However, the balanced workload assumption no longer holds in large language model (LLM) post-training due to the high variance in sequence lengths. Under imbalanced workloads, collective communication creates synchronization barriers, leading to under-utilization of devices with smaller workloads. This change in training dynamics calls for a revisit of the PS paradigm for its robustness to such imbalance. We propose On-Demand Communication (ODC), which adapts PS into Fully Sharded Data Parallel (FSDP) by replacing collective all-gather and reduce-scatter with direct point-to-point communication. Compared to FSDP, ODC reduces the synchronization barrier from once per layer to once per minibatch and decouples the workload on each device so that faster workers are not stalled. It also enables simpler and more effective load balancing at the minibatch level. Across diverse LLM post-training tasks, ODC consistently improves device utilization and training throughput, achieving up to a 36% speedup over standard FSDP. These results demonstrate that ODC is a superior fit for the prevalent imbalanced workloads in LLM post-training. Our implementation of ODC and integration with FSDP is open-sourced at https://github.com/sail-sg/odc.

  13. HalluCitation Matters: Revealing the Impact of Hallucinated References with 300 Hallucinated Papers in ACL Conferences

    Recently, we have often observed hallucinated citations or references that do not correspond to any existing work in papers under review, preprints, or published papers. Such hallucinated citations pose a serious concern to scientific reliability. When they appear in accepted papers, they may also negatively affect the credibility of conferences. In this study, we refer to hallucinated citations as "HalluCitation" and systematically investigate their prevalence and impact. We analyze all papers published at ACL, NAACL, and EMNLP in 2024 and 2025, including main conference, Findings, and workshop papers. Our analysis reveals that nearly 300 papers contain at least one HalluCitation, most of which were published in 2025. Notably, half of these papers were identified at EMNLP 2025, the most recent conference, indicating that this issue is rapidly increasing. Moreover, more than 100 such papers were accepted as main conference and Findings papers at EMNLP 2025, affecting the credibility of these venues.

  14. DeFM: Learning Foundation Representations from Depth for Robotics

    Depth sensors are widely deployed across robotic platforms, and advances in fast, high-fidelity depth simulation have enabled robotic policies trained on depth observations to achieve robust sim-to-real transfer for a wide range of tasks. Despite this, representation learning for depth modality remains underexplored compared to RGB, where large-scale foundation models now define the state of the art. To address this gap, we present DeFM, a self-supervised foundation model trained entirely on depth images for robotic applications. Using a DINO-style self-distillation objective on a curated dataset of 60M depth images, DeFM learns geometric and semantic representations that generalize to diverse environments, tasks, and sensors. To retain metric awareness across multiple scales, we introduce a novel input normalization strategy. We further distill DeFM into compact models suitable for resource-constrained robotic systems. When evaluated on depth-based classification, segmentation, navigation, locomotion, and manipulation benchmarks, DeFM achieves state-of-the-art performance and demonstrates strong generalization from simulation to real-world environments. We release all our pretrained models, which can be adopted off-the-shelf for depth-based robotic learning without task-specific fine-tuning. Webpage: https://de-fm.github.io/

  15. HyperAlign: Hypernetwork for Efficient Test-Time Alignment of Diffusion Models

    Diffusion models achieve state-of-the-art performance but often fail to generate outputs that align with human preferences and intentions, resulting in images with poor aesthetic quality and semantic inconsistencies. Existing alignment methods present a difficult trade-off: fine-tuning approaches suffer from loss of diversity with reward over-optimization, while test-time scaling methods introduce significant computational overhead and tend to under-optimize. To address these limitations, we propose HyperAlign, a novel framework that trains a hypernetwork for efficient and effective test-time alignment. Instead of modifying latent states, HyperAlign dynamically generates low-rank adaptation weights to modulate the diffusion model's generation operators. This allows the denoising trajectory to be adaptively adjusted based on input latents, timesteps and prompts for reward-conditioned alignment. We introduce multiple variants of HyperAlign that differ in how frequently the hypernetwork is applied, balancing between performance and efficiency. Furthermore, we optimize the hypernetwork using a reward score objective regularized with preference data to reduce reward hacking. We evaluate HyperAlign on multiple extended generative paradigms, including Stable Diffusion and FLUX. It significantly outperforms existing fine-tuning and test-time scaling baselines in enhancing semantic consistency and visual appeal.

Solidot (15)

  1. Doomsday Clock Set at 85 Seconds to Midnight

    The Bulletin of the Atomic Scientists has set the Doomsday Clock at 85 seconds to midnight, the closest to theoretical annihilation since Cold War-era scientists created the clock in 1947 to gauge how near human civilization stands to extinction. The Bulletin cited factors that have raised the risk of catastrophe: aggressive behavior by the three major nuclear powers, a fragile nuclear arms control framework, ongoing conflicts in Ukraine and the Middle East, the unregulated integration of AI into military systems, and climate change.

  2. Microsoft Misconfiguration Rerouted example.com Traffic to a Japanese Company's Domain

    Microsoft was found to be rerouting traffic for example.com, a domain reserved for testing, to sei.co.jp, the domain of Japan's Sumitomo Electric. The misconfiguration has been corrected, and Microsoft says it is investigating. example.com, along with example.net and example.org, is reserved for testing and is required to resolve to IANA-designated IPs; it should not be accessed by any party. Yet devices in Azure and other Microsoft networks had been routing some example.com traffic to sei.co.jp subdomains, and setting up a test account test@example.com in Outlook automatically configured mail traffic to route to two sei.co.jp subdomains: imapgms.jnet.sei.co.jp and smtpgms.jnet.sei.co.jp. It remains unclear why Sumitomo Electric was involved. Tinyapps.org reported earlier this month that the misconfiguration had existed for five years.

  3. More Than 10,000 STEM PhDs Left the US Government Last Year

    According to an analysis by the journal Science, the Trump administration's sharp cuts to the federal workforce led 10,109 PhDs in STEM fields to leave. Though they account for only 3% of all departing federal employees, they represent 14% of the government's STEM PhD workforce. The analysis shows the ratio of departures to new hires last year was as high as 11:1, for a net loss of 4,224 STEM PhDs. The US CDC lost 519 STEM PhDs, 16% of whom received layoff notices. Most federal agencies, however, did not lay off STEM PhD employees; most of the departing PhDs retired or resigned. The NIH lost more than 1,100 STEM PhDs.

  4. Making Gasoline from Electricity and Air Is Approaching Reality

    US startup Aircela is preparing to launch a machine that makes gasoline from electricity and air. It works in three steps. Carbon dioxide and water vapor are captured from the air; the water is split by electrolysis into hydrogen and oxygen, with the oxygen released. The remaining hydrogen and carbon dioxide are then combined into methanol via direct CO2 hydrogenation. Race cars can run on methanol, but ordinary cars cannot, so the machine's final step converts the methanol into gasoline. The machine produces roughly one gallon of gasoline per day, and its tank can store up to 17 gallons; for users who do not drive often, it could keep a car topped up. The target price is $15,000 to $20,000, and the company hopes prices will fall with mass production. The machine needs roughly twice as much electricity as the energy contained in the gasoline it makes: a gallon of gasoline holds about 37 kWh of energy but requires about 75 kWh of electricity to produce, so the machine makes most sense paired with off-grid solar.
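
    The energy figures above can be sanity-checked directly (using the article's ~37 kWh per gallon of gasoline and ~75 kWh of electricity per gallon produced):

```python
gasoline_energy_kwh_per_gal = 37   # chemical energy in one gallon of gasoline
electricity_kwh_per_gal = 75       # electricity the machine consumes per gallon

# round-trip efficiency of electricity -> gasoline
efficiency = gasoline_energy_kwh_per_gal / electricity_kwh_per_gal
print(f"round-trip efficiency: {efficiency:.0%}")  # about 49%

# electricity needed to fill the machine's maximum 17-gallon store
tank_gal = 17
print(f"electricity for {tank_gal} gallons: {tank_gal * electricity_kwh_per_gal} kWh")  # 1275 kWh
```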

  5. The Mechanism Behind Cancer's Protection Against Alzheimer's Disease

    For decades, scientists have noticed that cancer and Alzheimer's disease rarely occur in the same person, prompting speculation that one disease might somehow protect against the other. According to a study published in Cell, researchers working with mice have provided a molecular explanation for the phenomenon: cystatin C, a protein produced by cancer cells, can cross the blood-brain barrier and helps break down the misfolded protein plaques associated with Alzheimer's disease.

  6. Explore AI and GPU Development: Join the NVIDIA Developer Program and Start Your Accelerated Journey!

    Whether you are a student, researcher, or engineer, the program offers a community for troubleshooting and free access to more than a hundred software and profiling tools spanning many industries and use cases, including AI, HPC, autonomous vehicles, robotics, and simulation. Scan the QR code or click the link to join the NVIDIA Developer Program and unlock 17 benefits: free access to SDKs and technical documentation, help from peers and domain experts, and resources for getting the right hardware to tackle major challenges. https://developer.nvidia.cn/login?ncid=ref-dev-557858&sfdcid=Zhiding

  7. The Centenary of Television

    January 26 marked one hundred years since the birth of television. On January 26, 1926, Scottish inventor John Logie Baird gave the first demonstration of television to reporters at 22 Frith Street in London. Had it not been for his frail health, Baird might never have gone to London: he had wanted to serve in the First World War but was rejected as physically unfit. In 1923 he moved to the port town of Hastings for his health and built his first television transmission apparatus, but his landlord asked him to leave after the equipment gave him a 1,000-volt shock. In November 1924 he moved to London and improved the apparatus, and on January 26, 1926 he gave the first formal demonstration to reporters and members of the Royal Institution. The building at 22 Frith Street now bears three plaques commemorating the invention of television.

  8. French National Assembly Passes Bill Banning Social Media for Children Under 15

    France's National Assembly (the lower house) voted 130 to 21 to pass a bill banning children under 15 from using social media. Before becoming law, the bill must still pass the Senate (the upper house), which does not usually oppose bills passed by the lower house. The bill also bans high school students from using mobile phones. Once enacted, France would become the second country after Australia to ban children from social media. French President Emmanuel Macron said in a video that the emotions of children and teenagers should not be traded or manipulated, whether by American platforms or Chinese algorithms. If the Senate passes the bill by mid-February, the ban could take effect on September 1.

  9. Apple Releases Update for iPhone 5s Extending Certificate Validity

    Thirteen and twelve years after the release of the iPhone 5s and iPhone 6 respectively, Apple has released iOS 12.5.8, extending the validity of the certificates required for iMessage, FaceTime, device activation, and other functions so that they will keep working past January 2027. The iPhone 5s and iPhone 6 last received a software update in January 2023, to fix a serious security issue.

  10. Windows 11 Keeps Getting Worse Since Copilot Integration

    Since the release of Windows 11 24H2, Microsoft has faced a Windows Update scandal nearly every month, and the problems in this January's update are no exception. Windows 11 appears to be going downhill. Is it decades of accumulated technical debt reaching a tipping point, or something else, such as AI? Beyond update problems, Windows 11 is bloated: Windows Explorer needs preloading to launch quickly, and Windows update packages keep growing. What affects users most may be the AI features. Windows has added Windows Recall and deeply integrated Copilot, which is now everywhere: built into Edge, in Notepad, in Paint, in the Photos viewer, in Office, and of course in File Explorer. Copilot's ubiquity means there is no longer any "offline" version of Windows.

  11. TikTok US Investigates Why Users Cannot Mention Epstein in Direct Messages

    TikTok's US operation has announced an investigation into why users cannot mention Epstein in direct messages. TikTok US officials deny censoring the word "Epstein," but many consider the censorship obvious. One of the main shareholders in TikTok's US business is Oracle, whose chairman Larry Ellison is an ally of US President Trump, who has spent the past few months dogged by questions about his relationship with Jeffrey Epstein. Investigations by NPR and other outlets show the censorship is inconsistent: some users can mention Epstein in direct messages while others cannot. The TikTok US app also suffered a service outage in the past day.

  12. Valve Faces a £656 Million Class Action in the UK

    Valve faces a £656 million class-action lawsuit in the UK. The claimants represent 14 million UK users who have bought games or DLC on Valve's PC gaming platform Steam since 2018, and the suit accuses Steam of charging excessive commissions. Steam's commission, like that of other digital platforms, is roughly 30%. The claimants allege that Steam imposes conditions preventing game publishers from selling games at lower prices, or earlier, on rival platforms. Moreover, once a player buys a game on Steam, subsequent DLC and add-on content must also be purchased through Steam, effectively "locking" users into the platform, from which Steam collects "unfair and excessive" commissions. Valve moved to dismiss the suit, but London's Competition Appeal Tribunal ruled it can proceed.

  13. The World Is Not Ready for Extreme Heat

    Scientists project that by 2050, 3.79 billion people worldwide will face extreme heat. Tropical countries will bear the brunt, but cooler regions will need to adapt too. The research found that if global average temperatures rise 2°C above pre-industrial levels, the number of people experiencing extreme heat by 2050 is expected to double. Most of the impact will emerge within this decade, as global temperatures rapidly approach the 1.5°C threshold. Heat is often called a "silent killer" because most heat deaths happen slowly, as high temperatures and other environmental factors combine to break down the body's internal temperature regulation. Climate change is making heat waves longer and more intense, and cooling equipment such as air conditioning will become essential.

  14. Saudi Arabia's City of the Future May Become a Data Center Hub

    Saudi Arabia is preparing to sharply scale back Neom, its ambitious future-city project. Neom, meaning "new future," was originally planned to cost $500 billion. Located in Tabuk in the country's northwest and covering 26,500 square kilometers, its centerpiece is The Line, a linear city 500 meters tall and 170 kilometers long, designed to house 9 million residents between two parallel mirror-clad walls about 200 meters apart. The Line was to have three levels: a ground level for pedestrians and two underground levels, one for infrastructure and one for transit. It was also to include a high-speed rail line reaching 512 km/h, crossing the city end to end in 20 minutes. But The Line is now set for a complete redesign with a far leaner plan, and Neom may become a data center hub, using its coastal location to cool servers with seawater.

  15. Google AI Overviews Cites YouTube More Than Any Medical Site for Health Questions

    A study found that when answering health questions, Google AI Overviews cites YouTube more often than any medical website. Google has said that AI Overviews summaries are reliable and cite authoritative medical institutions such as the CDC as sources. SE Ranking researchers analyzed the results of more than 50,000 health queries in the Berlin area and found YouTube was the leading source, accounting for 4.43% of all citations; no medical institution came close to that share. Of 465,823 total citations, 20,621 went to YouTube. Google responded that the study used German-language searches, so its results do not generalize to other regions.
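
    The quoted 4.43% share follows directly from the raw counts in the article:

```python
youtube_citations = 20_621   # citations pointing to YouTube
total_citations = 465_823    # all citations in the analyzed AI Overviews results

share = youtube_citations / total_citations
print(f"YouTube share of citations: {share:.2%}")  # 4.43%
```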