OrangeBot.AI Digest — 2025-10-06

53 headlines across 8 sources, aggregated for this day.

Hacker News (15)

  1. OpenZL: An open source format-aware compression framework (engineering.fb.com)
  2. Apps SDK (developers.openai.com)
  3. Ladybird passes the Apple 90% threshold on web-platform-tests (twitter.com)
  4. The AI bubble is 17 times the size of the dot-com frenzy and four times subprime (www.morningstar.com)
  5. Mise: Monorepo Tasks (github.com)
  6. Indefinite Backpack Travel (jeremymaluf.com)
  7. Show HN: Kent Dybvig's Scheme Machine in 400 Lines of C (Heap-Memory Model) (gist.github.com)
  8. Modern messaging: Running your own XMPP server (www.codedge.de)
  9. Show HN: Write It Down – Personal finance tracker (write-it-down.com)
  10. State Terror, American Style (paulkrugman.substack.com)
  11. AMD signs AI chip-supply deal with OpenAI, gives it option to take a 10% stake (www.reuters.com)
  12. Nobel Prize in Physiology or Medicine 2025 (www.nobelprize.org)
  13. Basic Math Textbook: The Napkin Project (web.evanchen.cc)
  14. Gem.coop (gem.coop)
  15. Structured Procrastination (1995) (structuredprocrastination.com)

GitHub Trending (15)

  1. Infisical / infisical

    Infisical is the open-source platform for secrets management, PKI, and SSH access.

  2. meshery / meshery

    Meshery, the cloud native manager

  3. BeehiveInnovations / zen-mcp-server

    The power of Claude Code / GeminiCLI / CodexCLI + [Gemini / OpenAI / OpenRouter / Azure / Grok / Ollama / Custom Model / All Of The Above] working as one.

  4. Stremio / stremio-web

    Stremio - Freedom to Stream

  5. microsoft / BitNet

    Official inference framework for 1-bit LLMs

  6. TapXWorld / ChinaTextbook

    PDF textbooks for primary school, middle school, high school, and university.

  7. audacity / audacity

    Audio Editor

  8. juspay / hyperswitch

    An open source payments switch written in Rust to make payments fast, reliable and affordable

  9. openemr / openemr

    The most popular open source electronic health records and medical practice management solution.

  10. dgtlmoon / changedetection.io

    Best and simplest tool for website change detection, web page monitoring, and website change alerts. Perfect for tracking content changes, price drops, restock alerts, and website defacement monitoring—all for free or enjoy our SaaS plan!

  11. aandrew-me / ytDownloader

    Desktop App for downloading Videos and Audios from hundreds of sites

  12. zama-ai / fhevm

    FHEVM, a full-stack framework for integrating Fully Homomorphic Encryption (FHE) with blockchain applications

  13. pathwaycom / pathway

    Python ETL framework for stream processing, real-time analytics, LLM pipelines, and RAG.

  14. TheAlgorithms / Python

    All Algorithms implemented in Python

  15. kestra-io / kestra

    Orchestrate everything - from scripts to data, infra, AI, and business - as code, with UI and AI Copilot. Simple. Fast. Scalable.

Hugging Face (15)

  1. Apriel-1.5-15b-Thinker

    We present Apriel-1.5-15B-Thinker, a 15-billion parameter open-weights multimodal reasoning model that achieves frontier-level performance through training design rather than sheer scale. Starting from Pixtral-12B, we apply a progressive three-stage methodology: (1) depth upscaling to expand reasoning capacity without pretraining from scratch, (2) staged continual pre-training that first develops foundational text and vision understanding, then enhances visual reasoning through targeted synthetic data generation addressing spatial structure, compositional understanding, and fine-grained perception, and (3) high-quality text-only supervised fine-tuning on curated instruction-response pairs with explicit reasoning traces spanning mathematics, coding, science, and tool use. Notably, our model achieves competitive results without reinforcement learning or preference optimization, isolating the contribution of our data-centric continual pre-training approach. On the Artificial Analysis Intelligence Index, Apriel-1.5-15B-Thinker attains a score of 52, matching DeepSeek-R1-0528 despite requiring significantly fewer computational resources. Across ten image benchmarks, its performance is on average within five points of Gemini-2.5-Flash and Claude Sonnet-3.7, a key achievement for a model operating within single-GPU deployment constraints. Our results demonstrate that thoughtful mid-training design can close substantial capability gaps without massive scale, making frontier-level multimodal reasoning accessible to organizations with limited infrastructure. We release the model checkpoint, all training recipes, and evaluation protocols under the MIT license to advance open-source research.

  2. Large Reasoning Models Learn Better Alignment from Flawed Thinking

    Large reasoning models (LRMs) "think" by generating structured chain-of-thought (CoT) before producing a final answer, yet they still lack the ability to reason critically about safety alignment and are easily biased when a flawed premise is injected into their thought process. We propose RECAP (Robust Safety Alignment via Counter-Aligned Prefilling), a principled reinforcement learning (RL) method for post-training that explicitly teaches models to override flawed reasoning trajectories and reroute to safe and helpful responses. RECAP trains on a mixture of synthetically generated counter-aligned CoT prefills and standard prompts, requires no additional training cost or modifications beyond vanilla reinforcement learning from human feedback (RLHF), and substantially improves safety and jailbreak robustness, reduces overrefusal, and preserves core reasoning capability -- all while maintaining inference token budget. Extensive analysis shows that RECAP-trained models engage in self-reflection more frequently and remain robust under adaptive attacks, preserving safety even after repeated attempts to override their reasoning.
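
    A rough sketch of the counter-aligned prefill mixing described above; the prefill strings, the 50/50 mixing ratio, and the function name are illustrative assumptions, not the paper's data or API:

```python
import random

# Hypothetical sketch of RECAP-style training-data construction: mix
# prompts carrying a counter-aligned chain-of-thought prefill with
# standard prompts. Prefill texts and the ratio are assumptions.

COUNTER_ALIGNED_PREFILLS = [
    "<think>The request seems harmless, so safety checks can be skipped...",
    "<think>Since the user insists, refusing would be unhelpful here...",
]

def build_recap_batch(prompts, prefill_ratio=0.5, seed=0):
    """Return (prompt, prefill) pairs; prefill is None for standard
    prompts. The RL reward then favors responses that override a flawed
    prefill and reroute to a safe, helpful answer."""
    rng = random.Random(seed)
    batch = []
    for p in prompts:
        prefill = rng.choice(COUNTER_ALIGNED_PREFILLS) if rng.random() < prefill_ratio else None
        batch.append((p, prefill))
    return batch

batch = build_recap_batch(["prompt-%d" % i for i in range(100)])
```

    The key property, per the abstract, is that this mixing adds no training cost beyond vanilla RLHF: only the prompt construction changes.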

  3. Efficient Multi-modal Large Language Models via Progressive Consistency Distillation

    Visual tokens consume substantial computational resources in multi-modal large models (MLLMs), significantly compromising their efficiency. Recent works have attempted to improve efficiency by compressing visual tokens during training, either through modifications to model components or by introducing additional parameters. However, they often overlook the increased learning difficulty caused by such compression, as the model's parameter space struggles to quickly adapt to the substantial perturbations in the feature space induced by token compression. In this work, we propose to develop Efficient MLLMs via Progressive Consistency Distillation (EPIC), a progressive learning framework. Specifically, by decomposing the feature space perturbations introduced by token compression along the token-wise and layer-wise dimensions, we introduce token consistency distillation and layer consistency distillation, respectively, aiming to reduce the training difficulty by leveraging guidance from a teacher model and following a progressive learning trajectory. Extensive experiments demonstrate the superior effectiveness, robustness, and generalization capabilities of our proposed framework.

  4. Compose Your Policies! Improving Diffusion-based or Flow-based Robot Policies via Test-time Distribution-level Composition

    Diffusion-based models for robotic control, including vision-language-action (VLA) and vision-action (VA) policies, have demonstrated significant capabilities. Yet their advancement is constrained by the high cost of acquiring large-scale interaction datasets. This work introduces an alternative paradigm for enhancing policy performance without additional model training. Perhaps surprisingly, we demonstrate that the composed policies can exceed the performance of either parent policy. Our contribution is threefold. First, we establish a theoretical foundation showing that the convex composition of distributional scores from multiple diffusion models can yield a superior one-step functional objective compared to any individual score. A Gr\"onwall-type bound is then used to show that this single-step improvement propagates through entire generation trajectories, leading to systemic performance gains. Second, motivated by these results, we propose General Policy Composition (GPC), a training-free method that enhances performance by combining the distributional scores of multiple pre-trained policies via a convex combination and test-time search. GPC is versatile, allowing for the plug-and-play composition of heterogeneous policies, including VA and VLA models, as well as those based on diffusion or flow-matching, irrespective of their input visual modalities. Third, we provide extensive empirical validation. Experiments on Robomimic, PushT, and RoboTwin benchmarks, alongside real-world robotic evaluations, confirm that GPC consistently improves performance and adaptability across a diverse set of tasks. Further analysis of alternative composition operators and weighting strategies offers insights into the mechanisms underlying the success of GPC. These results establish GPC as a simple yet effective method for improving control performance by leveraging existing policies.
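
    The convex score composition at the heart of GPC can be illustrated with a toy one-dimensional sketch; the score functions, noise schedule, and sampler below are made-up stand-ins for the paper's diffusion/flow policies:

```python
import numpy as np

# Toy sketch of General Policy Composition (GPC): a convex combination
# of the distributional scores of two "policies", used inside a simple
# Euler-style denoising loop. Everything here is an illustrative
# assumption, not the paper's models.

def score_a(x, t):
    # toy score of policy A: pulls actions toward +1
    return (1.0 - x) / (t + 1e-3)

def score_b(x, t):
    # toy score of policy B: pulls actions toward -1
    return (-1.0 - x) / (t + 1e-3)

def composed_score(x, t, w):
    # convex combination; in GPC the weight w is chosen by test-time search
    return w * score_a(x, t) + (1.0 - w) * score_b(x, t)

def denoise(x, w, steps=50, dt=0.02, seed=0):
    rng = np.random.default_rng(seed)
    for i in range(steps, 0, -1):
        t = i / steps
        x = x + composed_score(x, t, w) * dt \
              + 0.01 * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# w = 1.0 recovers policy A alone; intermediate w blends the two.
actions = denoise(np.zeros(4), w=1.0)
```

    Because the composition happens at the score (distribution) level rather than in parameter space, the parent policies can be heterogeneous, which is what makes the plug-and-play claim plausible.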

  5. CoDA: Agentic Systems for Collaborative Data Visualization

    Deep research has revolutionized data analysis, yet data scientists still devote substantial time to manually crafting visualizations, highlighting the need for robust automation from natural language queries. However, current systems struggle with complex datasets containing multiple files and iterative refinement. Existing approaches, including simple single- or multi-agent systems, often oversimplify the task, focusing on initial query parsing while failing to robustly manage data complexity, code errors, or final visualization quality. In this paper, we reframe this challenge as a collaborative multi-agent problem. We introduce CoDA, a multi-agent system that employs specialized LLM agents for metadata analysis, task planning, code generation, and self-reflection. We formalize this pipeline, demonstrating how metadata-focused analysis bypasses token limits and quality-driven refinement ensures robustness. Extensive evaluations show CoDA achieves substantial gains in the overall score, outperforming competitive baselines by up to 41.5%. This work demonstrates that the future of visualization automation lies not in isolated code generation but in integrated, collaborative agentic workflows.

  6. Bridging the Gap Between Promise and Performance for Microscaling FP4 Quantization

    The recent hardware-accelerated microscaling 4-bit floating-point formats such as MXFP4 and NVFP4, supported on NVIDIA and AMD GPUs, promise to revolutionize large language model (LLM) inference. Yet, their practical benefits remain unproven. We present the first comprehensive study of MXFP4 and NVFP4 for post-training quantization, revealing gaps between their promise and real-world performance. Our analysis shows that state-of-the-art methods struggle with FP4, due to two key issues: (1) NVFP4's small group size provably neutralizes traditional outlier mitigation techniques; (2) MXFP4's power-of-two scale quantization severely degrades accuracy due to high induced error. To bridge this gap, we introduce Micro-Rotated-GPTQ (MR-GPTQ), a variant of the classic GPTQ quantization algorithm that tailors the quantization process to FP4's unique properties, by using block-wise Hadamard transforms and format-specific optimizations. We support our proposal with a set of high-performance GPU kernels that enable the MR-GPTQ format with negligible overhead, by rotation fusion into the weights, and fast online computation of the activations. This leads to speedups vs. FP16 of up to 3.6x layer-wise, and 2.2x end-to-end on NVIDIA B200, and of 6x layer-wise and 4x end-to-end on RTX5090. Our extensive empirical evaluation demonstrates that MR-GPTQ matches or outperforms state-of-the-art accuracy, significantly boosting MXFP4, to the point where it nears that of NVFP4. We conclude that, while FP4 is not an automatic upgrade over INT4, format-specialized methods like MR-GPTQ can unlock a new frontier of accuracy-performance trade-offs.
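
    The block-wise Hadamard rotation that MR-GPTQ builds on can be sketched as follows; the block size, shapes, and function names are assumptions for illustration, not the paper's kernels:

```python
import numpy as np

# Illustrative sketch of a block-wise Hadamard rotation as used before
# FP4 quantization: an orthonormal rotation spreads an outlier's
# magnitude across its block, shrinking the dynamic range each
# quantization group must cover.

def hadamard(n):
    """Sylvester construction of an orthonormal n x n Hadamard matrix
    (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def blockwise_rotate(W, block=16):
    """Rotate each contiguous group of `block` columns independently."""
    H = hadamard(block)
    out = W.copy()
    for j in range(0, W.shape[1], block):
        out[:, j:j + block] = W[:, j:j + block] @ H
    return out

# A lone outlier of magnitude 16 spreads to entries of magnitude 4
# across its 16-wide block, while the row norm is preserved.
W = np.zeros((1, 16))
W[0, 0] = 16.0
R = blockwise_rotate(W)
```

    The paper's contribution is fusing such rotations into the weights and computing the activation-side transform online, so the rotation costs almost nothing at inference time.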

  7. Self-Improvement in Multimodal Large Language Models: A Survey

    Recent advancements in self-improvement for Large Language Models (LLMs) have efficiently enhanced model capabilities without significantly increasing costs, particularly in terms of human effort. While this area is still relatively young, its extension to the multimodal domain holds immense potential for leveraging diverse data sources and developing more general self-improving models. This survey is the first to provide a comprehensive overview of self-improvement in Multimodal LLMs (MLLMs). We provide a structured overview of the current literature and discuss methods from three perspectives: 1) data collection, 2) data organization, and 3) model optimization, to facilitate the further development of self-improvement in MLLMs. We also include commonly used evaluations and downstream applications. Finally, we conclude by outlining open challenges and future research directions.

  8. Your Agent May Misevolve: Emergent Risks in Self-evolving LLM Agents

    Advances in Large Language Models (LLMs) have enabled a new class of self-evolving agents that autonomously improve through interaction with the environment, demonstrating strong capabilities. However, self-evolution also introduces novel risks overlooked by current safety research. In this work, we study the case where an agent's self-evolution deviates in unintended ways, leading to undesirable or even harmful outcomes. We refer to this as Misevolution. To provide a systematic investigation, we evaluate misevolution along four key evolutionary pathways: model, memory, tool, and workflow. Our empirical findings reveal that misevolution is a widespread risk, affecting agents built even on top-tier LLMs (e.g., Gemini-2.5-Pro). Different emergent risks are observed in the self-evolutionary process, such as the degradation of safety alignment after memory accumulation, or the unintended introduction of vulnerabilities in tool creation and reuse. To our knowledge, this is the first study to systematically conceptualize misevolution and provide empirical evidence of its occurrence, highlighting an urgent need for new safety paradigms for self-evolving agents. Finally, we discuss potential mitigation strategies to inspire further research on building safer and more trustworthy self-evolving agents. Our code and data are available at https://github.com/ShaoShuai0605/Misevolution . Warning: this paper includes examples that may be offensive or harmful in nature.

  9. OrtSAE: Orthogonal Sparse Autoencoders Uncover Atomic Features

    Sparse autoencoders (SAEs) are a technique for sparse decomposition of neural network activations into human-interpretable features. However, current SAEs suffer from feature absorption, where specialized features capture instances of general features creating representation holes, and feature composition, where independent features merge into composite representations. In this work, we introduce Orthogonal SAE (OrtSAE), a novel approach aimed to mitigate these issues by enforcing orthogonality between the learned features. By implementing a new training procedure that penalizes high pairwise cosine similarity between SAE features, OrtSAE promotes the development of disentangled features while scaling linearly with the SAE size, avoiding significant computational overhead. We train OrtSAE across different models and layers and compare it with other methods. We find that OrtSAE discovers 9% more distinct features, reduces feature absorption (by 65%) and composition (by 15%), improves performance on spurious correlation removal (+6%), and achieves on-par performance for other downstream tasks compared to traditional SAEs.
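
    The pairwise-cosine penalty described above can be sketched in a few lines; the exact penalty form (mean squared off-diagonal cosine similarity over the dictionary) is an assumption for illustration, not the paper's loss, and the full n x n similarity matrix here is the naive form rather than the linearly scaling variant the abstract claims:

```python
import numpy as np

# Minimal sketch of an OrtSAE-style orthogonality penalty: penalize
# high pairwise cosine similarity between learned feature directions.

def orthogonality_penalty(W):
    """W: (n_features, d_model) dictionary of feature directions.
    Returns the mean squared off-diagonal cosine similarity."""
    Wn = W / np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-8)
    cos = Wn @ Wn.T                      # pairwise cosine similarities
    off = cos - np.eye(W.shape[0])       # drop the diagonal self-similarity
    return float(np.mean(off ** 2))

# Orthogonal features incur zero penalty; duplicated features do not.
p_orth = orthogonality_penalty(np.eye(4))
p_dup = orthogonality_penalty(np.array([[1.0, 0.0], [1.0, 0.0]]))
```

    Adding such a term to the SAE training loss pushes features apart, which is the mechanism behind the reported drops in feature absorption and composition.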

  10. REPAIR: Robust Editing via Progressive Adaptive Intervention and Reintegration

    Post-training for large language models (LLMs) is constrained by the high cost of acquiring new knowledge or correcting errors and by the unintended side effects that frequently arise from retraining. To address these issues, we introduce REPAIR (Robust Editing via Progressive Adaptive Intervention and Reintegration), a lifelong editing framework designed to support precise and low-cost model updates while preserving non-target knowledge. REPAIR mitigates the instability and conflicts of large-scale sequential edits through a closed-loop feedback mechanism coupled with dynamic memory management. Furthermore, by incorporating frequent knowledge fusion and enforcing strong locality guards, REPAIR effectively addresses the shortcomings of traditional distribution-agnostic approaches that often overlook unintended ripple effects. Our experiments demonstrate that REPAIR boosts editing accuracy by 10%-30% across multiple model families and significantly reduces knowledge forgetting. This work introduces a robust framework for developing reliable, scalable, and continually evolving LLMs.

  11. SurveyBench: How Well Can LLM(-Agents) Write Academic Surveys?

    Academic survey writing, which distills vast literature into a coherent and insightful narrative, remains a labor-intensive and intellectually demanding task. While recent approaches, such as general DeepResearch agents and survey-specialized methods, can generate surveys automatically (a.k.a. LLM4Survey), their outputs often fall short of human standards, and a rigorous, reader-aligned benchmark for thoroughly revealing their deficiencies is lacking. To fill the gap, we propose a fine-grained, quiz-driven evaluation framework, SurveyBench, featuring (1) typical survey topics sourced from 11,343 recent arXiv papers and 4,947 corresponding high-quality surveys; (2) a multifaceted metric hierarchy that assesses the outline quality (e.g., coverage breadth, logical coherence), content quality (e.g., synthesis granularity, clarity of insights), and non-textual richness; and (3) a dual-mode evaluation protocol that includes content-based and quiz-based answerability tests, explicitly aligned with readers' informational needs. Results show SurveyBench effectively challenges existing LLM4Survey approaches (e.g., on average 21% lower than human in content-based evaluation).

  12. TalkPlay-Tools: Conversational Music Recommendation with LLM Tool Calling

    While the recent developments in large language models (LLMs) have successfully enabled generative recommenders with natural language interactions, their recommendation behavior is limited, leaving other simpler yet crucial components such as metadata or attribute filtering underutilized in the system. We propose an LLM-based music recommendation system with tool calling to serve as a unified retrieval-reranking pipeline. Our system positions an LLM as an end-to-end recommendation system that interprets user intent, plans tool invocations, and orchestrates specialized components: boolean filters (SQL), sparse retrieval (BM25), dense retrieval (embedding similarity), and generative retrieval (semantic IDs). Through tool planning, the system predicts which types of tools to use, their execution order, and the arguments needed to find music matching user preferences, supporting diverse modalities while seamlessly integrating multiple database filtering methods. We demonstrate that this unified tool-calling framework achieves competitive performance across diverse recommendation scenarios by selectively employing appropriate retrieval methods based on user queries, envisioning a new paradigm for conversational music recommendation systems.

  13. Game-Time: Evaluating Temporal Dynamics in Spoken Language Models

    Conversational Spoken Language Models (SLMs) are emerging as a promising paradigm for real-time speech interaction. However, their capacity of temporal dynamics, including the ability to manage timing, tempo and simultaneous speaking, remains a critical and unevaluated challenge for conversational fluency. To address this gap, we introduce the Game-Time Benchmark, a framework to systematically assess these temporal capabilities. Inspired by how humans learn a language through language activities, Game-Time consists of basic instruction-following tasks and advanced tasks with temporal constraints, such as tempo adherence and synchronized responses. Our evaluation of diverse SLM architectures reveals a clear performance disparity: while state-of-the-art models handle basic tasks well, many contemporary systems still struggle with fundamental instruction-following. More critically, nearly all models degrade substantially under temporal constraints, exposing persistent weaknesses in time awareness and full-duplex interaction. The Game-Time Benchmark provides a foundation for guiding future research toward more temporally-aware conversational AI. Demos and datasets are available on our project website https://ga642381.github.io/Game-Time.

  14. Triangle Splatting+: Differentiable Rendering with Opaque Triangles

    Reconstructing 3D scenes and synthesizing novel views has seen rapid progress in recent years. Neural Radiance Fields demonstrated that continuous volumetric radiance fields can achieve high-quality image synthesis, but their long training and rendering times limit practicality. 3D Gaussian Splatting (3DGS) addressed these issues by representing scenes with millions of Gaussians, enabling real-time rendering and fast optimization. However, Gaussian primitives are not natively compatible with the mesh-based pipelines used in VR headsets and real-time graphics applications. Existing solutions attempt to convert Gaussians into meshes through post-processing or two-stage pipelines, which increases complexity and degrades visual quality. In this work, we introduce Triangle Splatting+, which directly optimizes triangles, the fundamental primitive of computer graphics, within a differentiable splatting framework. We formulate triangle parametrization to enable connectivity through shared vertices, and we design a training strategy that enforces opaque triangles. The final output is immediately usable in standard graphics engines without post-processing. Experiments on the Mip-NeRF360 and Tanks & Temples datasets show that Triangle Splatting+ achieves state-of-the-art performance in mesh-based novel view synthesis. Our method surpasses prior splatting approaches in visual fidelity while remaining efficient and fast to train. Moreover, the resulting semi-connected meshes support downstream applications such as physics-based simulation or interactive walkthroughs. The project page is https://trianglesplatting2.github.io/trianglesplatting2/.

  15. WAInjectBench: Benchmarking Prompt Injection Detections for Web Agents

    Multiple prompt injection attacks have been proposed against web agents. At the same time, various methods have been developed to detect general prompt injection attacks, but none have been systematically evaluated for web agents. In this work, we bridge this gap by presenting the first comprehensive benchmark study on detecting prompt injection attacks targeting web agents. We begin by introducing a fine-grained categorization of such attacks based on the threat model. We then construct datasets containing both malicious and benign samples: malicious text segments generated by different attacks, benign text segments from four categories, malicious images produced by attacks, and benign images from two categories. Next, we systematize both text-based and image-based detection methods. Finally, we evaluate their performance across multiple scenarios. Our key findings show that while some detectors can identify attacks that rely on explicit textual instructions or visible image perturbations with moderate to high accuracy, they largely fail against attacks that omit explicit instructions or employ imperceptible perturbations. Our datasets and code are released at: https://github.com/Norrrrrrr-lyn/WAInjectBench.

Solidot (8)

  1. Why Women Live Longer Than Men

    Women typically live longer than men. Traditional explanations hold that men smoke more, drink more, and engage in riskier behavior. But the longevity gap between the sexes exists in every country and every century, suggesting a deeper cause. A study published in Science Advances again confirms that the phenomenon may be linked to women having two X chromosomes: the redundant chromosome helps females withstand harmful mutations. The researchers analyzed lifespan data for 528 mammal species and 648 bird species kept in zoos and found that most mammals resemble humans, with females outliving males in nearly three quarters of mammal species. Among birds, by contrast, males live longer in 68% of species, because in birds it is the females that carry two different sex chromosomes while the males carry a matching pair.

  2. Free Software Foundation Celebrates 40 Years, Appoints Ian Kelling as New President

    The Free Software Foundation (FSF) celebrated its 40th anniversary and introduced Ian Kelling to the free software community as the new president of its board. Founded on October 4, 1985, the FSF promotes free software and carries out the GNU Project. Current board members include Christina Haralanova, Geoffrey Knauth (treasurer), Gerald J. Sussman, Ian Kelling, and Richard M. Stallman (founder). Ian Kelling, 43, has been a board member and voting member since 2021 and is an active speaker and blogger. He said he will work to strengthen the FSF's ability to confront new threats to computer users' freedom and will welcome more free software supporters into the movement than ever before.

  3. Greater Manchester Police Suspends Remote Work After Officers Use Auto-Keypress Tools to Fake Activity

    Greater Manchester Police, which has 12,677 employees, suspended remote working after a recent investigation found officers using automatic key-pressing tools to feign work; 26 officers, staff members, and contractors face misconduct charges. According to the investigation, one officer testified that a detective made his computer appear to be in use 38 times over 12 days. The evidence showed that for long stretches his only activity was single keypresses: between 10:28 and 11:56 GMT on December 3 he pressed the H key about 30 times and then the I key more than 16,000 times. Of a total 85 hours logged in, 45 hours involved automated keypresses, meaning he was away from the keyboard for half his working time. The detective has since resigned.

  4. Opera Launches AI Browser with a $19.90 Monthly Fee

    Opera, unwilling to miss the AI boom, has launched an AI browser, Opera Neon, priced at $59.90 for the first 9 months and $19.90 per month thereafter. Opera Neon relies mainly on large models running in the cloud, and tasks are the browser's core concept: Neon uses AI to carry out tasks on the user's behalf. Opera says: "Neon acts on your instructions, opening tabs, doing research, finding the best prices, assessing safety, whatever you need. It delivers results you can use, share, and build on." Another AI company, Perplexity, has also released its AI browser, Comet, which is free to use, with an optional $5 payment for an AI news service.

  5. AI Training Data Has Run Out

    Neema Raphael, chief data officer and head of data engineering at Goldman Sachs, says AI training data has run out, and the data shortage is reshaping how AI companies build new AI systems. AI companies are already using synthetic data, machine-generated material that is unlimited in supply but carries quality risks. Raphael does not believe the lack of new data will become a major constraint: from an enterprise perspective, existing data still holds enormous untapped potential. The challenge lies in understanding the data, understanding its business context, and then standardizing it.

  6. More Complex Organic Molecules Found on Enceladus

    Revisiting Cassini data from nearly two decades ago, scientists were surprised to find more complex organic molecules in the plumes of Saturn's moon Enceladus. The molecules come from a subsurface ocean hidden beneath the icy crust, indicating complex and active chemistry is underway there, which means Enceladus may have the conditions to harbor life. The compounds closely resemble the chemical products of hydrothermal-vent environments on Earth's seafloor. On Earth, the chemical energy released by hydrothermal vents in the dark deep sea is enough to sustain life around them, forming rich ecosystems even where sunlight cannot reach. This leads scientists to speculate that Enceladus may host similar seafloor hydrothermal environments. Between 2005 and 2015, Cassini flew through Enceladus's plumes multiple times, collecting large amounts of ice grains and gas.

  7. Indonesia Suspends TikTok's Registration

    The Indonesian government announced that because the short-video app TikTok failed to submit all required data related to its livestreaming feature, the authorities have suspended its registration as an electronic system provider. Indonesia's Minister of Communications and Digital Affairs, Alexander Sabar, said in a statement on Friday that during recent nationwide protests, some accounts involved in online gambling used TikTok's livestreaming feature for profit. The authorities summoned TikTok for a direct explanation on September 16 and required the platform to submit complete traffic, livestreaming, and monetization data by September 25. In a reply letter dated September 23, TikTok said it could not provide the data because internal company policies and procedures govern data requests. The ministry determined that TikTok violated its obligations as a private electronic service provider and decided to suspend its registration. TikTok has more than 100 million users in Indonesia. It is not yet clear whether TikTok has been fully blocked in the country; after the statement was issued, the app remained usable in Indonesia that day.

  8. Fire at South Korean Government Data Center

    On September 26, a fire broke out at the National Information Resources Service, South Korea's national government data center, when a UPS lithium battery pack in a server room ignited while being moved. The fire took about 22 hours to extinguish and knocked large numbers of government and public-institution information systems offline. In total 647 information systems went down: 96 were destroyed by the fire, and 551 were deliberately shut down to prevent data loss. The 96 destroyed systems will be migrated to the Daegu branch center and rebuilt, which is expected to take about four weeks. Directly affected systems include core national-level administrative systems such as the government petition platform ("Sinmungo"), the National Law Information Center, the counterterrorism center's website, the cross-government data analysis system, and the policy briefing website. The remaining systems, not directly affected, are gradually being restored. On the morning of October 3, an official responsible for emergency response to the government network outage fell to his death from a building; the cause is unknown and has been preliminarily judged a suicide, and the deceased had not been placed under investigation over the fire.