OrangeBot.AI Digest — 2025-10-05
53 headlines across 4 sources, aggregated for this day.
Hacker News (15)
- Fire destroys S. Korean government's cloud storage system, no backups available (koreajoongangdaily.joins.com)
- NIST's DeepSeek "evaluation" is a hit piece (erichartford.com)
- The QNX Operating System (www.abortretry.fail)
- Lina Khan: Activision-Blizzard buyout is 'harming both gamers and developers' (www.pcgamer.com)
- If the University of Chicago won't defend the humanities, who will? (www.theatlantic.com)
- Retiring Test-IPv6.com (retire.test-ipv6.com)
- The deadline isn't when AI outsmarts us – it's when we stop using our own minds (www.theargumentmag.com)
- Which table format do LLMs understand best? (www.improvingagents.com)
- Self-hosting 10TB in S3 on a Framework laptop and disks (jamesoclaire.com)
- Beginner Guide to VPS Hetzner and Coolify (bhargav.dev)
- Personal data storage is an idea whose time has come (blog.muni.town)
- Benefits of choosing email over messaging (www.spinellis.gr)
- Social Cooling (2017) (www.socialcooling.com)
- Managing context on the Claude Developer Platform (www.anthropic.com)
- Ambigr.am (ambigr.am)
GitHub Trending (15)
- microsoft / BitNet
Official inference framework for 1-bit LLMs
- Flowseal / zapret-discord-youtube
- juspay / hyperswitch
An open source payments switch written in Rust to make payments fast, reliable and affordable
- meshery / meshery
Meshery, the cloud native manager
- Stremio / stremio-web
Stremio - Freedom to Stream
- glide-browser / glide
An extensible and keyboard-focused web browser
- comfyanonymous / ComfyUI
The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
- hsliuping / TradingAgents-CN
A Chinese financial trading framework based on multi-agent LLMs - a Chinese-enhanced edition of TradingAgents
- kestra-io / kestra
Orchestrate everything - from scripts to data, infra, AI, and business - as code, with UI and AI Copilot. Simple. Fast. Scalable.
- meshtastic / firmware
The official firmware for Meshtastic, an open-source, off-grid mesh communication system.
- pathwaycom / pathway
Python ETL framework for stream processing, real-time analytics, LLM pipelines, and RAG.
- YaLTeR / niri
A scrollable-tiling Wayland compositor.
- audacity / audacity
Audio Editor
- xtekky / gpt4free
The official gpt4free repository | various collection of powerful language models | o4, o3 and deepseek r1, gpt-4.1, gemini 2.5
- airweave-ai / airweave
Airweave lets agents search any app
Hugging Face (15)
- LongCodeZip: Compress Long Context for Code Language Models
Code generation under long contexts is becoming increasingly critical as Large Language Models (LLMs) are required to reason over extensive information in the codebase. While recent advances enable code LLMs to process long inputs, high API costs and generation latency remain substantial bottlenecks. Existing context pruning techniques, such as LLMLingua, achieve promising results for general text but overlook code-specific structures and dependencies, leading to suboptimal performance in programming tasks. In this paper, we propose LongCodeZip, a novel plug-and-play code compression framework designed specifically for code LLMs. LongCodeZip employs a dual-stage strategy: (1) coarse-grained compression, which identifies and ranks function-level chunks using conditional perplexity with respect to the instruction, retaining only the most relevant functions; and (2) fine-grained compression, which segments retained functions into blocks based on perplexity and selects an optimal subset under an adaptive token budget to maximize relevance. Evaluations across multiple tasks, including code completion, summarization, and question answering, show that LongCodeZip consistently outperforms baseline methods, achieving up to a 5.6x compression ratio without degrading task performance. By effectively reducing context size while preserving essential information, LongCodeZip enables LLMs to better scale to real-world, large-scale code scenarios, advancing the efficiency and capability of code intelligence applications.
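The coarse-grained stage described above (rank function-level chunks by conditional perplexity with respect to the instruction, then keep the most relevant chunks under a token budget) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the word-overlap scorer `toy_conditional_perplexity` and the helper `coarse_compress` are hypothetical stand-ins for a real language model and tokenizer.

```python
import math

def toy_conditional_perplexity(instruction, chunk):
    # Toy stand-in for an LM: chunks sharing more words with the
    # instruction get lower "perplexity" (i.e. higher relevance).
    inst_words = set(instruction.lower().split())
    chunk_words = chunk.lower().split()
    overlap = sum(1 for w in chunk_words if w in inst_words)
    return math.exp(1.0 - overlap / len(chunk_words))

def coarse_compress(instruction, functions, token_budget):
    # Rank function-level chunks by conditional perplexity (ascending)
    # and keep the most relevant ones under an overall token budget.
    ranked = sorted(functions,
                    key=lambda f: toy_conditional_perplexity(instruction, f))
    kept, used = [], 0
    for fn in ranked:
        cost = len(fn.split())  # crude proxy for a token count
        if used + cost <= token_budget:
            kept.append(fn)
            used += cost
    return kept

funcs = [
    "def parse_config(path): read the config file and return a dict",
    "def render_chart(data): draw a bar chart",
    "def config_defaults(): return the default config dict",
]
kept = coarse_compress("fix the config file parser", funcs, token_budget=17)
```

Under this toy scorer, the two config-related functions rank ahead of the chart code and fit the budget, so the irrelevant function is dropped.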
- Self-Forcing++: Towards Minute-Scale High-Quality Video Generation
Diffusion models have revolutionized image and video generation, achieving unprecedented visual quality. However, their reliance on transformer architectures incurs prohibitively high computational costs, particularly when extending generation to long videos. Recent work has explored autoregressive formulations for long video generation, typically by distilling from short-horizon bidirectional teachers. Nevertheless, given that teacher models cannot synthesize long videos, the extrapolation of student models beyond their training horizon often leads to pronounced quality degradation, arising from the compounding of errors within the continuous latent space. In this paper, we propose a simple yet effective approach to mitigate quality degradation in long-horizon video generation without requiring supervision from long-video teachers or retraining on long video datasets. Our approach centers on exploiting the rich knowledge of teacher models to provide guidance for the student model through sampled segments drawn from self-generated long videos. Our method maintains temporal consistency while scaling video length by up to 20x beyond the teacher's capability, avoiding common issues such as over-exposure and error accumulation without recomputing overlapping frames like previous methods. When scaling up the computation, our method shows the capability of generating videos up to 4 minutes and 15 seconds, equivalent to 99.9% of the maximum span supported by our base model's position embedding and more than 50x longer than that of our baseline model. Experiments on standard benchmarks and our proposed improved benchmark demonstrate that our approach substantially outperforms baseline methods in both fidelity and consistency. Our long-horizon video demos can be found at https://self-forcing-plus-plus.github.io/
- ExGRPO: Learning to Reason from Experience
Reinforcement learning from verifiable rewards (RLVR) is an emerging paradigm for improving the reasoning ability of large language models. However, standard on-policy training discards rollout experiences after a single update, leading to computational inefficiency and instability. While prior work on RL has highlighted the benefits of reusing past experience, the role of experience characteristics in shaping learning dynamics of large reasoning models remains underexplored. In this paper, we are the first to investigate what makes a reasoning experience valuable and identify rollout correctness and entropy as effective indicators of experience value. Based on these insights, we propose ExGRPO (Experiential Group Relative Policy Optimization), a framework that organizes and prioritizes valuable experiences, and employs a mixed-policy objective to balance exploration with experience exploitation. Experiments on five backbone models (1.5B-8B parameters) show that ExGRPO consistently improves reasoning performance on mathematical/general benchmarks, with an average gain of +3.5/7.6 points over on-policy RLVR. Moreover, ExGRPO stabilizes training on both stronger and weaker models where on-policy methods fail. These results highlight principled experience management as a key ingredient for efficient and scalable RLVR.
- StealthAttack: Robust 3D Gaussian Splatting Poisoning via Density-Guided Illusions
3D scene representation methods like Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS) have significantly advanced novel view synthesis. As these methods become prevalent, addressing their vulnerabilities becomes critical. We analyze 3DGS robustness against image-level poisoning attacks and propose a novel density-guided poisoning method. Our method strategically injects Gaussian points into low-density regions identified via Kernel Density Estimation (KDE), embedding viewpoint-dependent illusory objects clearly visible from poisoned views while minimally affecting innocent views. Additionally, we introduce an adaptive noise strategy to disrupt multi-view consistency, further enhancing attack effectiveness. We propose a KDE-based evaluation protocol to assess attack difficulty systematically, enabling objective benchmarking for future research. Extensive experiments demonstrate our method's superior performance compared to state-of-the-art techniques. Project page: https://hentci.github.io/stealthattack/
- StockBench: Can LLM Agents Trade Stocks Profitably In Real-world Markets?
Large language models (LLMs) have recently demonstrated strong capabilities as autonomous agents, showing promise in reasoning, tool use, and sequential decision-making. While prior benchmarks have evaluated LLM agents in domains such as software engineering and scientific discovery, the finance domain remains underexplored, despite its direct relevance to economic value and high-stakes decision-making. Existing financial benchmarks primarily test static knowledge through question answering, but they fall short of capturing the dynamic and iterative nature of trading. To address this gap, we introduce StockBench, a contamination-free benchmark designed to evaluate LLM agents in realistic, multi-month stock trading environments. Agents receive daily market signals -- including prices, fundamentals, and news -- and must make sequential buy, sell, or hold decisions. Performance is assessed using financial metrics such as cumulative return, maximum drawdown, and the Sortino ratio. Our evaluation of state-of-the-art proprietary (e.g., GPT-5, Claude-4) and open-weight (e.g., Qwen3, Kimi-K2, GLM-4.5) models shows that while most LLM agents struggle to outperform the simple buy-and-hold baseline, several models demonstrate the potential to deliver higher returns and manage risk more effectively. These findings highlight both the challenges and opportunities in developing LLM-powered financial agents, showing that excelling at static financial knowledge tasks does not necessarily translate into successful trading strategies. We release StockBench as an open-source resource to support reproducibility and advance future research in this domain.
- Interactive Training: Feedback-Driven Neural Network Optimization
Traditional neural network training typically follows fixed, predefined optimization recipes, lacking the flexibility to dynamically respond to instabilities or emerging training issues. In this paper, we introduce Interactive Training, an open-source framework that enables real-time, feedback-driven intervention during neural network training by human experts or automated AI agents. At its core, Interactive Training uses a control server to mediate communication between users or agents and the ongoing training process, allowing users to dynamically adjust optimizer hyperparameters, training data, and model checkpoints. Through three case studies, we demonstrate that Interactive Training achieves superior training stability, reduced sensitivity to initial hyperparameters, and improved adaptability to evolving user needs, paving the way toward a future training paradigm where AI agents autonomously monitor training logs, proactively resolve instabilities, and optimize training dynamics.
- RLP: Reinforcement as a Pretraining Objective
The dominant paradigm for training large reasoning models starts with pre-training using next-token prediction loss on vast amounts of data. Reinforcement learning, while powerful in scaling reasoning, is introduced only as the very last phase of post-training, preceded by supervised fine-tuning. While dominant, is this an optimal way of training? In this paper, we present RLP, an information-driven reinforcement pretraining objective, that brings the core spirit of reinforcement learning -- exploration -- to the last phase of pretraining. The key idea is to treat chain-of-thought as an exploratory action, with rewards computed based on the information gain it provides for predicting future tokens. This training objective essentially encourages the model to think for itself before predicting what comes next, thus teaching an independent thinking behavior earlier in the pretraining. More concretely, the reward signal measures the increase in log-likelihood of the next token when conditioning on both context and a sampled reasoning chain, compared to conditioning on context alone. This approach yields a verifier-free dense reward signal, allowing for efficient training for the full document stream during pretraining. Specifically, RLP reframes reinforcement learning for reasoning as a pretraining objective on ordinary text, bridging the gap between next-token prediction and the emergence of useful chain-of-thought reasoning. Pretraining with RLP on Qwen3-1.7B-Base lifts the overall average across an eight-benchmark math-and-science suite by 19%. With identical post-training, the gains compound, with the largest improvements on reasoning-heavy tasks such as AIME25 and MMLU-Pro. Applying RLP to the hybrid Nemotron-Nano-12B-v2 increases the overall average from 42.81% to 61.32% and raises the average on scientific reasoning by 23%, demonstrating scalability across architectures and model sizes.
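The reward signal described in the abstract, the gain in next-token log-likelihood from conditioning on a sampled reasoning chain, reduces to a one-line log-ratio. The sketch below is illustrative only; `rlp_reward` and the probability values are hypothetical, not the paper's code.

```python
import math

def rlp_reward(p_next_with_cot, p_next_without_cot):
    # Information-gain reward: log-likelihood of the correct next token
    # conditioned on context plus a sampled reasoning chain, minus the
    # log-likelihood conditioned on the context alone.
    return math.log(p_next_with_cot) - math.log(p_next_without_cot)

helpful   = rlp_reward(0.4, 0.2)  # chain doubled the token's probability
unhelpful = rlp_reward(0.1, 0.2)  # chain halved it
```

A chain that makes the next token twice as likely earns a reward of log 2, while an unhelpful chain is penalized, which is how the objective rewards "thinking before predicting" without any external verifier.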
- ModernVBERT: Towards Smaller Visual Document Retrievers
Multimodal embedding models are gaining prevalence, notably for document retrieval as efficient alternatives to text-only pipelines. These models are typically built by finetuning large vision-language decoders (VLMs) with contrastive losses on text-image pairs. In this work, we show that, while cost-efficient, this repurposing approach often bottlenecks retrieval performance. Through controlled experiments, we establish a principled recipe for improving visual document retrieval models. We notably measure the impact of attention masking, image resolution, modality alignment data regimes, and late interaction centered contrastive objectives which emerge as central performance factors. Building on these insights, we release ModernVBERT, a compact 250M-parameter vision-language encoder that outperforms models up to 10 times larger when finetuned on document retrieval tasks. Models and code are made available at https://huggingface.co/ModernVBERT.
- Tree-based Dialogue Reinforced Policy Optimization for Red-Teaming Attacks
Despite recent rapid progress in AI safety, current large language models remain vulnerable to adversarial attacks in multi-turn interaction settings, where attackers strategically adapt their prompts across conversation turns and pose a more critical yet realistic challenge. Existing approaches that discover safety vulnerabilities either rely on manual red-teaming with human experts or employ automated methods using pre-defined templates and human-curated attack data, with most focusing on single-turn attacks. However, these methods did not explore the vast space of possible multi-turn attacks, failing to consider novel attack trajectories that emerge from complex dialogue dynamics and strategic conversation planning. This gap is particularly critical given recent findings that LLMs exhibit significantly higher vulnerability to multi-turn attacks compared to single-turn attacks. We propose DialTree-RPO, an on-policy reinforcement learning framework integrated with tree search that autonomously discovers diverse multi-turn attack strategies by treating the dialogue as a sequential decision-making problem, enabling systematic exploration without manually curated data. Through extensive experiments, our approach not only achieves more than 25.9% higher ASR across 10 target models compared to previous state-of-the-art approaches, but also effectively uncovers new attack strategies by learning optimal dialogue policies that maximize attack success across multiple turns.
- Ovi: Twin Backbone Cross-Modal Fusion for Audio-Video Generation
Audio-video generation has often relied on complex multi-stage architectures or sequential synthesis of sound and visuals. We introduce Ovi, a unified paradigm for audio-video generation that models the two modalities as a single generative process. By using blockwise cross-modal fusion of twin-DiT modules, Ovi achieves natural synchronization and removes the need for separate pipelines or post hoc alignment. To facilitate fine-grained multimodal fusion modeling, we initialize an audio tower with an architecture identical to that of a strong pretrained video model. Trained from scratch on hundreds of thousands of hours of raw audio, the audio tower learns to generate realistic sound effects, as well as speech that conveys rich speaker identity and emotion. Fusion is obtained by jointly training the identical video and audio towers via blockwise exchange of timing (via scaled-RoPE embeddings) and semantics (through bidirectional cross-attention) on a vast video corpus. Our model enables cinematic storytelling with natural speech and accurate, context-matched sound effects, producing movie-grade video clips. All the demos, code and model weights are published at https://aaxwaz.github.io/Ovi
- The Rogue Scalpel: Activation Steering Compromises LLM Safety
Activation steering is a promising technique for controlling LLM behavior by adding semantically meaningful vectors directly into a model's hidden states during inference. It is often framed as a precise, interpretable, and potentially safer alternative to fine-tuning. We demonstrate the opposite: steering systematically breaks model alignment safeguards, making it comply with harmful requests. Through extensive experiments on different model families, we show that even steering in a random direction can increase the probability of harmful compliance from 0% to 2-27%. Alarmingly, steering benign features from a sparse autoencoder (SAE), a common source of interpretable directions, increases these rates by a further 2-4%. Finally, we show that combining 20 randomly sampled vectors that jailbreak a single prompt creates a universal attack, significantly increasing harmful compliance on unseen requests. These results challenge the paradigm of safety through interpretability, showing that precise control over model internals does not guarantee precise control over model behavior.
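Mechanically, activation steering is as simple as the abstract implies, which is part of what makes the finding alarming. The sketch below shows the basic operation of adding a scaled, unit-normalized vector to one layer's hidden state at inference time; `steer`, the toy hidden state, and the direction vector are all hypothetical illustrations, not the paper's setup.

```python
import math

def steer(hidden, direction, alpha):
    # Add a unit-normalized steering direction, scaled by alpha,
    # to one layer's hidden-state vector at inference time.
    norm = math.sqrt(sum(d * d for d in direction))
    return [h + alpha * d / norm for h, d in zip(hidden, direction)]

hidden = [0.5, -1.2, 0.3, 0.9]     # toy 4-dim hidden state
direction = [1.0, 0.0, -1.0, 0.0]  # hypothetical steering direction
steered = steer(hidden, direction, alpha=2.0)

# Because the direction is normalized, the displacement magnitude
# equals alpha regardless of the direction chosen.
disp = math.sqrt(sum((s - h) ** 2 for s, h in zip(steered, hidden)))
```

The paper's point is that even a random choice of `direction` can degrade alignment, so precise control over this operation does not imply precise control over behavior.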
- VOGUE: Guiding Exploration with Visual Uncertainty Improves Multimodal Reasoning
Reinforcement learning with verifiable rewards (RLVR) improves reasoning in large language models (LLMs) but struggles with exploration, an issue that still persists for multimodal LLMs (MLLMs). Current methods treat the visual input as a fixed, deterministic condition, overlooking a critical source of ambiguity and struggling to build policies robust to plausible visual variations. We introduce VOGUE (Visual Uncertainty Guided Exploration), a novel method that shifts exploration from the output (text) to the input (visual) space. By treating the image as a stochastic context, VOGUE quantifies the policy's sensitivity to visual perturbations using the symmetric KL divergence between a "raw" and "noisy" branch, creating a direct signal for uncertainty-aware exploration. This signal shapes the learning objective via an uncertainty-proportional bonus, which, combined with a token-entropy bonus and an annealed sampling schedule, effectively balances exploration and exploitation. Implemented within GRPO on two model scales (Qwen2.5-VL-3B/7B), VOGUE boosts pass@1 accuracy by an average of 2.6% on three visual math benchmarks and 3.7% on three general-domain reasoning benchmarks, while simultaneously increasing pass@4 performance and mitigating the exploration decay commonly observed in RL fine-tuning. Our work shows that grounding exploration in the inherent uncertainty of visual inputs is an effective strategy for improving multimodal reasoning.
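The uncertainty signal in the abstract, the symmetric KL divergence between the policy's next-token distributions on a raw and a perturbed image, can be written down directly. The sketch below is illustrative; `sym_kl` and the toy distributions are assumptions, not the paper's code.

```python
import math

def sym_kl(p, q):
    # Symmetric KL divergence between two discrete distributions.
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return kl(p, q) + kl(q, p)

raw   = [0.7, 0.2, 0.1]  # next-token distribution on the clean image
noisy = [0.4, 0.4, 0.2]  # same policy on a perturbed image

# Larger divergence = the policy is more sensitive to visual
# perturbations, earning a bigger exploration bonus.
uncertainty = sym_kl(raw, noisy)
```

When the two branches agree the divergence is zero and no bonus is paid, so exploration concentrates on inputs where the visual evidence is genuinely ambiguous.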
- The Unreasonable Effectiveness of Scaling Agents for Computer Use
Computer-use agents (CUAs) hold promise for automating everyday digital tasks, but their unreliability and high variance hinder their application to long-horizon, complex tasks. We introduce Behavior Best-of-N (bBoN), a method that scales over agents by generating multiple rollouts and selecting among them using behavior narratives that describe the agents' rollouts. It enables both wide exploration and principled trajectory selection, substantially improving robustness and success rates. On OSWorld, our bBoN scaling method establishes a new state of the art (SoTA) at 69.9%, significantly outperforming prior methods and approaching human-level performance at 72%, with comprehensive ablations validating key design choices. We further demonstrate strong generalization results to different operating systems on WindowsAgentArena and AndroidWorld. Crucially, our results highlight the unreasonable effectiveness of scaling CUAs, when you do it right: effective scaling requires structured trajectory understanding and selection, and bBoN provides a practical framework to achieve this.
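The bBoN selection step (generate multiple rollouts, summarize each into a behavior narrative, then pick the best one) can be sketched as below. The `narrate` and `judge` functions are hypothetical stand-ins for the paper's narrative generator and selector, and the "rollouts" are toy action lists.

```python
def behavior_best_of_n(rollouts, narrate, judge):
    # Summarize each rollout into a behavior narrative, then let a
    # judge score the narratives and return the winning rollout.
    narratives = [narrate(r) for r in rollouts]
    scores = [judge(n) for n in narratives]
    return rollouts[scores.index(max(scores))]

# Toy stand-ins: a rollout is a list of actions; the judge prefers
# narratives where the goal was reached in fewer steps.
rollouts = [
    ["open settings", "get lost", "retry", "reach goal"],
    ["open settings", "reach goal"],
    ["open settings", "click around"],
]
narrate = lambda r: f"{len(r)} steps, " + (
    "goal reached" if r[-1] == "reach goal" else "failed")
judge = lambda n: ("goal reached" in n) * 100 - int(n.split()[0])

best = behavior_best_of_n(rollouts, narrate, judge)
```

Judging compact narratives rather than raw trajectories is the design point: it lets the selector compare long rollouts cheaply while still rewarding the behavior that actually reached the goal.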
- RewardMap: Tackling Sparse Rewards in Fine-grained Visual Reasoning via Multi-Stage Reinforcement Learning
Fine-grained visual reasoning remains a core challenge for multimodal large language models (MLLMs). The recently introduced ReasonMap highlights this gap by showing that even advanced MLLMs struggle with spatial reasoning in structured and information-rich settings such as transit maps, a task of clear practical and scientific importance. However, standard reinforcement learning (RL) on such tasks is impeded by sparse rewards and unstable optimization. To address this, we first construct ReasonMap-Plus, an extended dataset that introduces dense reward signals through Visual Question Answering (VQA) tasks, enabling effective cold-start training of fine-grained visual understanding skills. Next, we propose RewardMap, a multi-stage RL framework designed to improve both visual understanding and reasoning capabilities of MLLMs. RewardMap incorporates two key designs. First, we introduce a difficulty-aware reward design that incorporates detail rewards, directly tackling the sparse rewards while providing richer supervision. Second, we propose a multi-stage RL scheme that bootstraps training from simple perception to complex reasoning tasks, offering a more effective cold-start strategy than conventional Supervised Fine-Tuning (SFT). Experiments on ReasonMap and ReasonMap-Plus demonstrate that each component of RewardMap contributes to consistent performance gains, while their combination yields the best results. Moreover, models trained with RewardMap achieve an average improvement of 3.47% across 6 benchmarks spanning spatial reasoning, fine-grained visual reasoning, and general tasks beyond transit maps, underscoring enhanced visual understanding and reasoning capabilities.
- F2LLM Technical Report: Matching SOTA Embedding Performance with 6 Million Open-Source Data
We introduce F2LLM - Foundation to Feature Large Language Models, a suite of state-of-the-art embedding models in three sizes: 0.6B, 1.7B, and 4B. Unlike previous top-ranking embedding models that require massive contrastive pretraining, sophisticated training pipelines, and costly synthetic training data, F2LLM is directly finetuned from foundation models on 6 million query-document-negative tuples curated from open-source, non-synthetic datasets, striking a strong balance between training cost, model size, and embedding performance. On the MTEB English leaderboard, F2LLM-4B ranks 2nd among models with approximately 4B parameters and 7th overall, while F2LLM-1.7B ranks 1st among models in the 1B-2B size range. To facilitate future research in the field, we release the models, training dataset, and code, positioning F2LLM as a strong, reproducible, and budget-friendly baseline for future works.
Solidot (8)
- Why women live longer than men
Women typically outlive men. Traditional explanations hold that men smoke more, drink more, and engage in riskier behavior, yet the longevity gap persists in every country and every century, suggesting a deeper cause. A study published in Science Advances again ties the phenomenon to women's two X chromosomes: the redundant copy helps females withstand harmful mutations. Researchers analyzed lifespan data for 528 mammal and 648 bird species kept in zoos and found that most mammals resemble humans, with females outliving males in nearly three-quarters of mammal species. Among birds, by contrast, males live longer in 68% of species, because in birds it is the female that carries two different sex chromosomes while the male carries an identical pair.
- Free Software Foundation celebrates 40 years, names Ian Kelling as new president
The Free Software Foundation (FSF) celebrated its 40th anniversary and introduced Ian Kelling, the new president of its board of directors, to the free software community. Founded on October 4, 1985, the FSF promotes free software and stewards the GNU Project. Current board members include Christina Haralanova, Geoffrey Knauth (treasurer), Gerald J. Sussman, Ian Kelling, and Richard M. Stallman (founder). Kelling, 43, has been a board member and voting member since 2021 and is an active speaker and blogger. He said he will work to strengthen the FSF's ability to confront new threats to computer users' freedom and to welcome more free software supporters into the movement than ever before.
- Greater Manchester Police suspends remote work after officers used auto-keypress tools to fake activity
Greater Manchester Police, which employs 12,677 staff, suspended remote working after a recent investigation found officers using automatic key-pressing tools to simulate work; 26 officers, staff members, and contractors face misconduct proceedings. According to the investigation, one officer testified that a detective made his computer appear to be in use 38 times over 12 days. Evidence showed that for long stretches his only activity was single keystrokes: between 10:28 and 11:56 GMT on December 3 he pressed the H key about 30 times and then pressed the I key more than 16,000 times. Of 85 hours logged in overall, 45 involved automated keypresses, and he was away from the keyboard for half of his working time. The detective has since resigned.
- Opera launches an AI browser for $19.90 a month
Opera, not wanting to miss the AI boom, has launched an AI browser called Opera Neon, priced at $59.90 for the first nine months and $19.90 per month thereafter. Opera Neon relies mainly on large models running in the cloud, and tasks are the browser's core concept: Neon uses AI to carry out all kinds of tasks for the user. Opera says: "Neon acts on your instructions, opening tabs, doing research, finding the best prices, and evaluating safety, whatever you need. It delivers results you can use, share, and build on." Another AI company, Perplexity, has also released its AI browser, Comet, which is free to use with an optional $5 payment for an AI news service.
- AI training data has run out
Neema Raphael, Goldman Sachs' chief data officer and head of data engineering, says AI training data has been exhausted, and the shortage is reshaping how AI companies build new systems. AI companies are already turning to synthetic data, machine-generated material that is unlimited in supply but carries quality risks. Raphael does not believe the lack of new data will become a major constraint: from an enterprise perspective, existing data still holds enormous untapped potential. The challenge lies in understanding the data, understanding its business context, and then standardizing it.
- More complex organic molecules found on Enceladus
Scientists revisiting Cassini data from nearly two decades ago were surprised to find more complex organic molecules in the plumes of Saturn's moon Enceladus. The molecules come from a subsurface ocean hidden beneath the icy crust, indicating complex and active chemistry and suggesting Enceladus may have the conditions to harbor life. The compounds closely resemble the chemical reactants found around hydrothermal vents on Earth's seafloor. On Earth, the chemical energy released by deep-sea hydrothermal vents sustains life around them, forming rich ecosystems even where sunlight cannot reach. This leads scientists to speculate that Enceladus may host similar seafloor hydrothermal environments. Between 2005 and 2015, Cassini flew through Enceladus's plumes multiple times, collecting large amounts of ice grains and gas.
- Indonesia suspends TikTok's registration
The Indonesian government announced that because the short-video app TikTok failed to submit all required data related to its livestreaming feature, authorities have suspended its registration as an electronic system provider. Communications and Digital Minister Alexander Sabar said in a statement on Friday that during recent nationwide protests, some accounts involved in online gambling used TikTok's livestream feature to profit. Authorities summoned TikTok for a direct explanation on September 16 and required the platform to submit complete traffic, livestream, and monetization data by September 25. In a letter dated September 23, TikTok replied that internal policies and procedures governing data requests prevented it from providing the data. The ministry determined that TikTok had violated its obligations as a private electronic service provider and decided to suspend its registration. TikTok has more than 100 million users in Indonesia. It remains unclear whether the app is fully blocked in the country; after the statement was issued, it was still working normally in Indonesia that day.
- Fire at South Korea's government data center
On September 26 a fire broke out at the National Information Resources Service, South Korea's national data center, when a UPS lithium battery pack ignited while being moved in a server room. The blaze took about 22 hours to extinguish and knocked out the information systems of a large number of South Korean government and public institutions. In all, 647 information systems went down: 96 were destroyed by the fire and 551 were shut down deliberately to prevent data loss. The 96 destroyed systems will be migrated to the Daegu branch center and rebuilt, which is expected to take about four weeks. Directly affected systems include first-tier national administrative systems such as the government petition platform Sinmungo, the National Law Information Center, the counter-terrorism center's website, the cross-government data analysis system, and the policy briefing site. The remaining systems, not directly damaged, are gradually being restored. On the morning of October 3, an official in charge of emergency response to the government network outage fell to his death from a building; the cause is unknown and is preliminarily judged to be suicide, and the deceased had not been named a subject of investigation over the fire.