OrangeBot.AI Digest — 2025-12-13

60 headlines across 4 sources, aggregated for the day.

Hacker News (15)

  1. VPN location claims don't match real traffic exits (ipinfo.io)
  2. Dick Van Dyke turns 100 (www.theguardian.com)
  3. Analysis finds anytime electricity from solar available as battery costs plummet (pv-magazine-usa.com)
  4. I tried Gleam for Advent of Code (blog.tymscar.com)
  5. What is the nicest thing a stranger has ever done for you? (louplummer.lol)
  6. Ask HN: How can I get better at using AI for programming?
  7. Go Proposal: Secret Mode (antonz.org)
  8. Useful patterns for building HTML tools (simonwillison.net)
  9. LG TV's new software update installed MS Copilot, which cannot be deleted (old.reddit.com)
  10. We built another object storage (fractalbits.com)
  11. How exchanges turn order books into distributed logs (quant.engineering)
  12. YouTube's CEO limits his kids' social media use – other tech bosses do the same (www.cnbc.com)
  13. Computer animator and Amiga fanatic Dick van Dyke turns 100
  14. Photographer built a medium-format rangefinder (petapixel.com)
  15. Beautiful Abelian Sandpiles (eavan.blog)

GitHub Trending (15)

  1. CopilotKit / CopilotKit

    React UI + elegant infrastructure for AI Copilots, AI chatbots, and in-app AI agents. The Agentic last-mile 🪁

  2. DayuanJiang / next-ai-draw-io

    A next.js web application that integrates AI capabilities with draw.io diagrams. This app allows you to create, modify, and enhance diagrams through natural language commands and AI-assisted visualization.

  3. thedotmack / claude-mem

    A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions.

  4. mindsdb / mindsdb

    Federated query engine for AI - The only MCP Server you'll ever need

  5. simstudioai / sim

    Open-source platform to build and deploy AI agent workflows.

  6. Tencent / WeKnora

    LLM-powered framework for deep document understanding, semantic retrieval, and context-aware answers using RAG paradigm.

  7. spipm / Depixelization_poc

    Depix is a PoC for a technique to recover plaintext from pixelized screenshots.

  8. YimMenu / YimMenuV2

    Experimental menu for GTA 5: Enhanced

  9. tursodatabase / turso

    Turso is an in-process SQL database, compatible with SQLite.

  10. langgenius / dify

    Production-ready platform for agentic workflow development.

  11. datawhalechina / hello-agents

    📚 "Building Agents from Scratch": a from-the-ground-up tutorial on agent principles and practice

  12. agentsmd / agents.md

    AGENTS.md — a simple, open format for guiding coding agents

  13. shadcn-ui / ui

    A set of beautifully-designed, accessible components and a code distribution platform. Works with your favorite frameworks. Open Source. Open Code.

  14. karpathy / nanoGPT

    The simplest, fastest repository for training/finetuning medium-sized GPTs.

  15. ChromeDevTools / chrome-devtools-mcp

    Chrome DevTools for coding agents
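Item 7 above, Depix, rests on the observation that pixelization is plain block-averaging: a lossy but deterministic operation, so candidate images of known text can be pixelized the same way and matched block-by-block against the redacted region. The sketch below illustrates that idea only; it is not Depix's actual code, and the tiny 3x3 "glyph" bitmaps are hypothetical stand-ins for rendered text.

```python
# Minimal sketch of the matching idea behind Depix: pixelize candidate
# renderings with the same block-average filter, then pick the closest one.

def pixelize(img, block=3):
    """Average each block x block tile of a grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            tile = [img[yy][xx]
                    for yy in range(y, min(y + block, h))
                    for xx in range(x, min(x + block, w))]
            row.append(sum(tile) // len(tile))
        out.append(row)
    return out

def match(pixelized_target, candidates, block=3):
    """Return the candidate label whose pixelization is closest to the target."""
    def dist(a, b):
        return sum(abs(pa - pb) for ra, rb in zip(a, b)
                   for pa, pb in zip(ra, rb))
    return min(candidates,
               key=lambda label: dist(pixelize(candidates[label], block),
                                      pixelized_target))

# Hypothetical 3x3 glyphs for "A" and "B" as grayscale bitmaps.
glyphs = {
    "A": [[0, 255, 0], [255, 255, 255], [255, 0, 255]],
    "B": [[255, 255, 0], [255, 255, 255], [255, 255, 0]],
}

secret = pixelize(glyphs["A"])   # the "redacted" screenshot region
print(match(secret, glyphs))     # recovers "A"
```

The real tool has to handle unknown block offsets, fonts, and multi-character sequences, but the core search-and-compare loop is the same shape.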

Hugging Face (15)

  1. T-pro 2.0: An Efficient Russian Hybrid-Reasoning Model and Playground

    We introduce T-pro 2.0, an open-weight Russian LLM for hybrid reasoning and efficient inference. The model supports direct answering and reasoning-trace generation, using a Cyrillic-dense tokenizer and an adapted EAGLE speculative-decoding pipeline to reduce latency. To enable reproducible and extensible research, we release the model weights, the T-Wix 500k instruction corpus, the T-Math reasoning benchmark, and the EAGLE weights on Hugging Face. These resources allow users to study Russian-language reasoning and to extend or adapt both the model and the inference pipeline. A public web demo exposes reasoning and non-reasoning modes and illustrates the speedups achieved by our inference stack across domains. T-pro 2.0 thus serves as an accessible open system for building and evaluating efficient, practical Russian LLM applications.

  2. Long-horizon Reasoning Agent for Olympiad-Level Mathematical Problem Solving

    Large language models (LLMs) have achieved significant progress in solving complex reasoning tasks by Reinforcement Learning with Verifiable Rewards (RLVR). This advancement is also inseparable from the oversight automated by reliable verifiers. However, current outcome-based verifiers (OVs) are unable to inspect the unreliable intermediate steps in long reasoning chains of thought (CoTs). Meanwhile, current process-based verifiers (PVs) have difficulty reliably detecting errors in complex long CoTs, limited by the scarcity of high-quality annotations due to the prohibitive cost of human annotation. Therefore, we propose the Outcome-based Process Verifier (OPV), which verifies the rationale process of summarized outcomes from long CoTs to achieve both accurate and efficient verification and enable large-scale annotation. To empower the proposed verifier, we adopt an iterative active learning framework with expert annotations to progressively improve the verification capability of OPV at lower annotation cost. Specifically, in each iteration, the most uncertain cases for the current best OPV are annotated and then used to train a new OPV through Rejection Fine-Tuning (RFT) and RLVR for the next round. Extensive experiments demonstrate OPV's superior performance and broad applicability. It achieves new state-of-the-art results on our held-out OPV-Bench, outperforming much larger open-source models such as Qwen3-Max-Preview with an F1 score of 83.1 compared to 76.3. Furthermore, OPV effectively detects false positives within synthetic datasets, closely aligning with expert assessments. When collaborating with policy models, OPV consistently yields performance gains, e.g., raising the accuracy of DeepSeek-R1-Distill-Qwen-32B from 55.2% to 73.3% on AIME2025 as the compute budget scales.

  3. Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation

    Reinforcement learning (RL), earlier proven effective in large language and multi-modal models, has recently been extended to enhance 2D image generation. However, applying RL to 3D generation remains largely unexplored due to the higher spatial complexity of 3D objects, which require globally consistent geometry and fine-grained local textures. This makes 3D generation highly sensitive to reward designs and RL algorithms. To address these challenges, we conduct the first systematic study of RL for text-to-3D autoregressive generation across several dimensions. (1) Reward designs: We evaluate reward dimensions and model choices, showing that alignment with human preference is crucial, and that general multi-modal models provide robust signal for 3D attributes. (2) RL algorithms: We study GRPO variants, highlighting the effectiveness of token-level optimization, and further investigate the scaling of training data and iterations. (3) Text-to-3D benchmarks: Since existing benchmarks fail to measure implicit reasoning abilities in 3D generation models, we introduce MME-3DR. (4) Advanced RL paradigms: Motivated by the natural hierarchy of 3D generation, we propose Hi-GRPO, which optimizes global-to-local hierarchical 3D generation through dedicated reward ensembles. Based on these insights, we develop AR3D-R1, the first RL-enhanced text-to-3D model, which progresses from coarse shape generation to texture refinement. We hope this study provides insights into RL-driven reasoning for 3D generation. Code is released at https://github.com/Ivan-Tang-3D/3DGen-R1.

  4. OPV: Outcome-based Process Verifier for Efficient Long Chain-of-Thought Verification

    Large language models (LLMs) have achieved significant progress in solving complex reasoning tasks by Reinforcement Learning with Verifiable Rewards (RLVR). This advancement is also inseparable from the oversight automated by reliable verifiers. However, current outcome-based verifiers (OVs) are unable to inspect the unreliable intermediate steps in long reasoning chains of thought (CoTs). Meanwhile, current process-based verifiers (PVs) have difficulty reliably detecting errors in complex long CoTs, limited by the scarcity of high-quality annotations due to the prohibitive cost of human annotation. Therefore, we propose the Outcome-based Process Verifier (OPV), which verifies the rationale process of summarized outcomes from long CoTs to achieve both accurate and efficient verification and enable large-scale annotation. To empower the proposed verifier, we adopt an iterative active learning framework with expert annotations to progressively improve the verification capability of OPV at lower annotation cost. Specifically, in each iteration, the most uncertain cases for the current best OPV are annotated and then used to train a new OPV through Rejection Fine-Tuning (RFT) and RLVR for the next round. Extensive experiments demonstrate OPV's superior performance and broad applicability. It achieves new state-of-the-art results on our held-out OPV-Bench, outperforming much larger open-source models such as Qwen3-Max-Preview with an F1 score of 83.1 compared to 76.3. Furthermore, OPV effectively detects false positives within synthetic datasets, closely aligning with expert assessments. When collaborating with policy models, OPV consistently yields performance gains, e.g., raising the accuracy of DeepSeek-R1-Distill-Qwen-32B from 55.2% to 73.3% on AIME2025 as the compute budget scales.

  5. Achieving an Olympiad-Level Geometry Large Language Model Agent via Complexity-Boosting Reinforcement Learning

    Large language model (LLM) agents exhibit strong mathematical problem-solving abilities and can even solve International Mathematical Olympiad (IMO) level problems with the assistance of formal proof systems. However, due to weak heuristics for auxiliary constructions, AI for geometry problem solving remains dominated by expert models such as AlphaGeometry 2, which rely heavily on large-scale data synthesis and search for both training and evaluation. In this work, we make the first attempt to build a medalist-level LLM agent for geometry and present InternGeometry. InternGeometry overcomes the heuristic limitations in geometry by iteratively proposing propositions and auxiliary constructions, verifying them with a symbolic engine, and reflecting on the engine's feedback to guide subsequent proposals. A dynamic memory mechanism enables InternGeometry to conduct more than two hundred interactions with the symbolic engine per problem. To further accelerate learning, we introduce Complexity-Boosting Reinforcement Learning (CBRL), which gradually increases the complexity of synthesized problems across training stages. Built on InternThinker-32B, InternGeometry solves 44 of 50 IMO geometry problems (2000-2024), exceeding the average gold medalist score (40.9), using only 13K training examples, just 0.004% of the data used by AlphaGeometry 2, demonstrating the potential of LLM agents on expert-level geometry tasks. InternGeometry can also propose novel auxiliary constructions for IMO problems that do not appear in human solutions. We will release the model, data, and symbolic engine to support future research.

  6. MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos

    Motion capture now underpins content creation far beyond digital humans, yet most existing pipelines remain species- or template-specific. We formalize this gap as Category-Agnostic Motion Capture (CAMoCap): given a monocular video and an arbitrary rigged 3D asset as a prompt, the goal is to reconstruct a rotation-based animation such as BVH that directly drives the specific asset. We present MoCapAnything, a reference-guided, factorized framework that first predicts 3D joint trajectories and then recovers asset-specific rotations via constraint-aware inverse kinematics. The system contains three learnable modules and a lightweight IK stage: (1) a Reference Prompt Encoder that extracts per-joint queries from the asset's skeleton, mesh, and rendered images; (2) a Video Feature Extractor that computes dense visual descriptors and reconstructs a coarse 4D deforming mesh to bridge the gap between video and joint space; and (3) a Unified Motion Decoder that fuses these cues to produce temporally coherent trajectories. We also curate Truebones Zoo with 1038 motion clips, each providing a standardized skeleton-mesh-render triad. Experiments on both in-domain benchmarks and in-the-wild videos show that MoCapAnything delivers high-quality skeletal animations and exhibits meaningful cross-species retargeting across heterogeneous rigs, enabling scalable, prompt-driven 3D motion capture for arbitrary assets. Project page: https://animotionlab.github.io/MoCapAnything/

  7. BEAVER: An Efficient Deterministic LLM Verifier

    As large language models (LLMs) transition from research prototypes to production systems, practitioners often need reliable methods to verify that model outputs satisfy required constraints. While sampling-based estimates provide an intuition of model behavior, they offer no sound guarantees. We present BEAVER, the first practical framework for computing deterministic, sound probability bounds on LLM constraint satisfaction. Given any prefix-closed semantic constraint, BEAVER systematically explores the generation space using novel token trie and frontier data structures, maintaining provably sound bounds at every iteration. We formalize the verification problem, prove soundness of our approach, and evaluate BEAVER on correctness verification, privacy verification and secure code generation tasks across multiple state of the art LLMs. BEAVER achieves 6 to 8 times tighter probability bounds and identifies 3 to 4 times more high risk instances compared to baseline methods under identical computational budgets, enabling precise characterization and risk assessment that loose bounds or empirical evaluation cannot provide.

  8. Thinking with Images via Self-Calling Agent

    Thinking-with-images paradigms have showcased remarkable visual reasoning capability by integrating visual information as dynamic elements into the Chain-of-Thought (CoT). However, optimizing interleaved multimodal CoT (iMCoT) through reinforcement learning remains challenging, as it relies on scarce high-quality reasoning data. In this study, we propose Self-Calling Chain-of-Thought (sCoT), a novel visual reasoning paradigm that reformulates iMCoT as a language-only CoT with self-calling. Specifically, a main agent decomposes a complex visual reasoning task into atomic subtasks and invokes its virtual replicas, i.e., parameter-sharing subagents, to solve them in isolated contexts. sCoT enjoys substantial training effectiveness and efficiency, as it requires no explicit interleaving between modalities. sCoT employs group-relative policy optimization to reinforce effective reasoning behavior. Experiments on HR-Bench 4K show that sCoT improves overall reasoning performance by up to 1.9% with ~75% fewer GPU hours compared to strong baseline approaches. Code is available at https://github.com/YWenxi/think-with-images-through-self-calling.

  9. From Macro to Micro: Benchmarking Microscopic Spatial Intelligence on Molecules via Vision-Language Models

    This paper introduces the concept of Microscopic Spatial Intelligence (MiSI), the capability to perceive and reason about the spatial relationships of invisible microscopic entities, which is fundamental to scientific discovery. To assess the potential of Vision-Language Models (VLMs) in this domain, we propose a systematic benchmark framework MiSI-Bench. This framework features over 163,000 question-answer pairs and 587,000 images derived from approximately 4,000 molecular structures, covering nine complementary tasks that evaluate abilities ranging from elementary spatial transformations to complex relational identifications. Experimental results reveal that current state-of-the-art VLMs perform significantly below human level on this benchmark. However, a fine-tuned 7B model demonstrates substantial potential, even surpassing humans in spatial transformation tasks, while its poor performance in scientifically-grounded tasks like hydrogen bond recognition underscores the necessity of integrating explicit domain knowledge for progress toward scientific AGI. The datasets are available at https://huggingface.co/datasets/zongzhao/MiSI-bench.

  10. VQRAE: Representation Quantization Autoencoders for Multimodal Understanding, Generation and Reconstruction

    Unifying multimodal understanding, generation, and reconstruction representations in a single tokenizer remains a key challenge in building unified models. Previous research predominantly attempts to address this with a dual-encoder paradigm, e.g., utilizing separate encoders for understanding and generation, or balancing semantic representations and low-level features with a contrastive loss. In this paper, we propose VQRAE, a Vector Quantization version of Representation AutoEncoders, which pioneers the first exploration of a unified representation producing continuous semantic features for image understanding and discrete tokens for visual generation within a single tokenizer. Specifically, we build upon pretrained vision foundation models with a symmetric ViT decoder and adopt a two-stage training strategy: first, we freeze the encoder and learn a high-dimensional semantic VQ codebook with a pixel reconstruction objective; then we jointly optimize the encoder with self-distillation constraints. This design incurs negligible loss of semantic information, maintaining multimodal understanding ability while yielding discrete tokens compatible with generation and fine-grained reconstruction. Besides, we identify an intriguing property of quantizing semantic encoders: they rely on a high-dimensional codebook, in contrast to the previous common practice of low-dimensional codebooks in image reconstruction. The semantic VQ codebook achieves a 100% utilization ratio at a dimension of 1536. VQRAE presents competitive performance on several benchmarks of visual understanding, generation, and reconstruction, with promising scaling properties in the autoregressive paradigm owing to its discrete merits.

  11. Stronger Normalization-Free Transformers

    Although normalization layers have long been viewed as indispensable components of deep learning architectures, the recent introduction of Dynamic Tanh (DyT) has demonstrated that alternatives are possible. The point-wise function DyT constrains extreme values for stable convergence and reaches normalization-level performance; this work searches for function designs that can surpass it. We first study how the intrinsic properties of point-wise functions influence training and performance. Building on these findings, we conduct a large-scale search for a more effective function design. Through this exploration, we introduce Derf(x) = erf(αx + s), where erf is the rescaled Gaussian cumulative distribution function, and identify it as the most performant design. Derf outperforms LayerNorm, RMSNorm, and DyT across a wide range of domains, including vision (image recognition and generation), speech representation, and DNA sequence modeling. Our findings suggest that the performance gains of Derf largely stem from improved generalization rather than stronger fitting capacity. Its simplicity and stronger performance make Derf a practical choice for normalization-free Transformer architectures.

  12. StereoSpace: Depth-Free Synthesis of Stereo Geometry via End-to-End Diffusion in a Canonical Space

    We introduce StereoSpace, a diffusion-based framework for monocular-to-stereo synthesis that models geometry purely through viewpoint conditioning, without explicit depth or warping. A canonical rectified space and the conditioning guide the generator to infer correspondences and fill disocclusions end-to-end. To ensure fair and leakage-free evaluation, we introduce an end-to-end protocol that excludes any ground truth or proxy geometry estimates at test time. The protocol emphasizes metrics reflecting downstream relevance: iSQoE for perceptual comfort and MEt3R for geometric consistency. StereoSpace surpasses other methods from the warp & inpaint, latent-warping, and warped-conditioning categories, achieving sharp parallax and strong robustness on layered and non-Lambertian scenes. This establishes viewpoint-conditioned diffusion as a scalable, depth-free solution for stereo generation.

  13. Evaluating Gemini Robotics Policies in a Veo World Simulator

    Generative world models hold significant potential for simulating interactions with visuomotor policies in varied environments. Frontier video models can enable generation of realistic observations and environment interactions in a scalable and general manner. However, the use of video models in robotics has been limited primarily to in-distribution evaluations, i.e., scenarios that are similar to ones used to train the policy or fine-tune the base video model. In this report, we demonstrate that video models can be used for the entire spectrum of policy evaluation use cases in robotics: from assessing nominal performance to out-of-distribution (OOD) generalization, and probing physical and semantic safety. We introduce a generative evaluation system built upon a frontier video foundation model (Veo). The system is optimized to support robot action conditioning and multi-view consistency, while integrating generative image-editing and multi-view completion to synthesize realistic variations of real-world scenes along multiple axes of generalization. We demonstrate that the system preserves the base capabilities of the video model to enable accurate simulation of scenes that have been edited to include novel interaction objects, novel visual backgrounds, and novel distractor objects. This fidelity enables accurately predicting the relative performance of different policies in both nominal and OOD conditions, determining the relative impact of different axes of generalization on policy performance, and performing red teaming of policies to expose behaviors that violate physical or semantic safety constraints. We validate these capabilities through 1600+ real-world evaluations of eight Gemini Robotics policy checkpoints and five tasks for a bimanual manipulator.

  14. MoRel: Long-Range Flicker-Free 4D Motion Modeling via Anchor Relay-based Bidirectional Blending with Hierarchical Densification

    Recent advances in 4D Gaussian Splatting (4DGS) have extended the high-speed rendering capability of 3D Gaussian Splatting (3DGS) into the temporal domain, enabling real-time rendering of dynamic scenes. However, one of the major remaining challenges lies in modeling dynamic videos containing long-range motion, where a naive extension of existing methods leads to severe memory explosion, temporal flickering, and failure to handle occlusions that appear or disappear over time. To address these challenges, we propose MoRel, a novel 4DGS framework characterized by an Anchor Relay-based Bidirectional Blending (ARBB) mechanism, which enables temporally consistent and memory-efficient modeling of long-range dynamic scenes. Our method progressively constructs locally canonical anchor spaces at key-frame time indices and models inter-frame deformations at the anchor level, enhancing temporal coherence. By learning bidirectional deformations between key-frame anchors (KfA) and adaptively blending them through learnable opacity control, our approach mitigates temporal discontinuities and flickering artifacts. We further introduce a Feature-variance-guided Hierarchical Densification (FHD) scheme that effectively densifies KfAs while preserving rendering quality, based on an assigned level of feature variance. To effectively evaluate our model's ability to handle real-world long-range 4D motion, we compose a new long-range 4D motion dataset, called SelfCap-LR. It has a larger average dynamic-motion magnitude and was captured in spatially wider spaces than previous dynamic video datasets. Overall, MoRel achieves temporally coherent and flicker-free long-range 4D reconstruction while maintaining bounded memory usage, demonstrating both scalability and efficiency in dynamic Gaussian-based representations.

  15. ReViSE: Towards Reason-Informed Video Editing in Unified Models with Self-Reflective Learning

    Video unified models exhibit strong capabilities in understanding and generation, yet they struggle with reason-informed visual editing even when equipped with powerful internal vision-language models (VLMs). We attribute this gap to two factors: 1) existing datasets are inadequate for training and evaluating reasoning-aware video editing, and 2) an inherent disconnect between the models' reasoning and editing capabilities prevents their rich understanding from effectively instructing the editing process. Bridging this gap requires an integrated framework that connects reasoning with visual transformation. To this end, we introduce the Reason-Informed Video Editing (RVE) task, which requires reasoning about physical plausibility and causal dynamics during editing. To support systematic evaluation, we construct RVE-Bench, a comprehensive benchmark with two complementary subsets: Reasoning-Informed Video Editing and In-Context Video Generation. These subsets cover diverse reasoning dimensions and real-world editing scenarios. Building upon this foundation, we propose ReViSE, a Self-Reflective Reasoning (SRF) framework that unifies generation and evaluation within a single architecture. The model's internal VLM provides intrinsic feedback by assessing whether the edited video logically satisfies the given instruction, and this differential feedback refines the generator's reasoning behavior during training. Extensive experiments on RVE-Bench demonstrate that ReViSE significantly enhances editing accuracy and visual fidelity, achieving a 32% improvement in the Overall score on the reasoning-informed video editing subset over state-of-the-art methods.
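Item 11 above gives Derf in closed form, Derf(x) = erf(αx + s), so its shape is easy to inspect directly. A minimal sketch using Python's stdlib math.erf follows; fixing alpha=1 and s=0 is an assumption for illustration, since in the paper these would be learned per-channel parameters inside a Transformer.

```python
import math

def derf(x, alpha=1.0, s=0.0):
    """Derf(x) = erf(alpha*x + s): a bounded point-wise function with
    values in (-1, 1), proposed as a normalization-layer replacement.
    alpha and s are illustrative defaults, not the learned values."""
    return math.erf(alpha * x + s)

# Like tanh, Derf squashes extreme activations, which is the property
# credited with stabilizing training without a normalization layer.
print([round(derf(x), 3) for x in (-10, -1, 0, 1, 10)])
# → [-1.0, -0.843, 0.0, 0.843, 1.0]
```

The Gaussian-CDF shape saturates slightly faster in the tails than tanh, which the paper's search identified as the better-performing profile.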
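Item 7's BEAVER maintains provably sound lower and upper bounds on the probability that a model's output satisfies a prefix-closed constraint. The bound structure can be sketched without BEAVER's actual token-trie machinery: any prefix that already violates the constraint is pruned and its mass counted against the upper bound, any finished satisfying sequence adds to the lower bound, and the still-unexplored frontier mass is the gap between the two. The toy "language model" and the no-"bb" constraint below are invented for the example.

```python
# Minimal sketch of sound probability bounds over a generation space,
# in the spirit of item 7 (not BEAVER's actual data structures).

def bounds(next_token_probs, ok_prefix, max_steps):
    """Return (lower, upper) bounds on P(sequence satisfies ok_prefix)."""
    lower, violated = 0.0, 0.0
    frontier = [((), 1.0)]  # (prefix, probability mass)
    for _ in range(max_steps):
        new_frontier = []
        for prefix, p in frontier:
            for tok, q in next_token_probs(prefix).items():
                ext = prefix + (tok,)
                if not ok_prefix(ext):
                    violated += p * q        # pruned: counts against the upper bound
                elif tok == "<eos>":
                    lower += p * q           # finished and satisfying
                else:
                    new_frontier.append((ext, p * q))
        frontier = new_frontier
    return lower, 1.0 - violated             # unexplored mass stays in the gap

# Toy model: emits "a" or "b" with prob 0.4 each, stops with prob 0.2.
model = lambda prefix: {"a": 0.4, "b": 0.4, "<eos>": 0.2}
no_bb = lambda seq: ("b", "b") not in zip(seq, seq[1:])  # forbid "bb"
lo, hi = bounds(model, no_bb, max_steps=3)
# lower ~= 0.456, upper ~= 0.776 for this toy model
```

More exploration steps tighten the interval monotonically, which is the sense in which the bounds are sound at every iteration.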

Solidot (15)

  1. Chinese satellite nearly collided with a Starlink satellite

    On December 10, CAS Space's Lijian-1 (Kinetica 1) Y11 carrier rocket successfully placed 9 satellites into orbit from the Dongfeng launch site. The payload comprised nine satellites: the UAE's 813 satellite, Jixing Gaofen 07B01, 07C01, and 07D01, Dongpo-15, Yuxing-2 09, Yixian-A, SPNEX, and Slippers2Sat. On Friday SpaceX disclosed that one of the satellites nearly collided with Starlink satellite STARLINK-6079 (56120), passing within just 200 meters. Michael Nicolls, SpaceX's VP of Starlink Engineering, complained that the launch had not been coordinated in advance with satellites already in orbit. CAS Space said it is investigating the incident. The congestion is largely driven by SpaceX itself: in 2020 there were fewer than 3,400 operational satellites in orbit, while today there are more than 13,000, most of them SpaceX Starlink broadband satellites, which now number some 9,300; SpaceX launched over 3,000 this year alone.

  2. China leads in nearly 90% of critical technology fields

    According to a report from the Australian Strategic Policy Institute (ASPI), China leads research in nearly 90% of critical technology fields. ASPI assessed research in 74 current and emerging technologies; China ranked first in 66 of them, including nuclear energy, synthetic biology, and small satellites, while the US ranked first in 8, including quantum computing and geoengineering. The results show a dramatic reversal of the US-China technology balance: in the early 2000s the US led in 90% of these fields while China led in fewer than 5%. ASPI analyzed a database of more than 9 million publications, ranking countries by the affiliations of authors of the top 10% most-cited papers over the past five years. Steven Hai, a political economist at Xi'an Jiaotong-Liverpool University in Suzhou, cautioned that the analysis should not be read as a collapse of American strength.

  3. US auto alliance urges government to block Chinese automakers from building US plants

    An industry alliance of major automakers including General Motors, Ford, Toyota, Volkswagen, Hyundai, and Stellantis has urged the US government to block Chinese automakers and battery manufacturers from building plants in the US, calling China a "clear and present threat" to the American auto industry. The alliance called on members of Congress to maintain the ban on importing IT technology and services from China, a ban that effectively prohibits importing cars from Chinese manufacturers. The group said that no amount of domestic investment by American automakers and battery producers can offset the effects of China's long-term, subsidy-driven global oversupply. That oversupply could lead to dumping, and Congress and the Trump administration must prevent it from happening in the US market.

  4. Russian ransomware group stores master key in plaintext

    After months of silence, the pro-Russian hacking group CyberVolk has launched CyberVolk 2.x (aka VolkLocker), a Telegram-based ransomware-as-a-service. The Telegram-based model lowers the barrier to entry, but the good news is that the developers slipped up while testing: the master key is hardcoded in the executable, meaning victims can decrypt their files without paying the ransom. VolkLocker does not generate encryption keys dynamically; the hardcoded master key is written in plaintext to the %TEMP% folder. The ransomware was found to encrypt files with AES-256-GCM (Galois/Counter Mode).

  5. Hollywood director defrauded Netflix of $11 million, spent it on cryptocurrency and luxury cars

    Hollywood director Carl Rinsch is best known for directing commercials; his feature debut was 2013's 47 Ronin, starring Keanu Reeves and Hiroyuki Sanada. The film cost $175 million but grossed only $150 million, making it one of the biggest money-losers of its year, and Rinsch returned to commercials. He and his wife then developed a sci-fi series about organic intelligence whose concept attracted streaming companies; Netflix acquired the rights and agreed to invest $61.2 million to produce the series, titled Conquest. Production did not go smoothly, and Netflix abandoned the show in 2021. Rinsch was accused of misappropriating $11 million, transferring the funds to a personal brokerage account and losing more than half of it on securities bets within two months. He then speculated in Dogecoin, cashing out in May 2021 for a $23 million profit that dramatically improved his finances. He went on to spend $2.4 million on five Rolls-Royces and a Ferrari, $3.3 million on furniture and antiques, and $387,000 on a Swiss watch. Netflix has written off $55 million in bad debt and recovered nothing. This week a jury in the Southern District of New York convicted the 48-year-old director on seven counts; he faces up to 90 years in prison, with sentencing set for April 17, 2026.

  6. Scientists map 97% of the world's buildings in 3D

    Scientists have created a 3D map covering 97% of the world's buildings. The map, GlobalBuildingAtlas, is published on GitHub under an MIT license with a Commons Clause restriction (prohibiting commercial sale). The dataset covers 2.75 billion buildings, mapping each building's footprint and height at 3 m × 3 m spatial resolution, and can be used for disaster risk assessment, climate modeling, and urban planning. The researchers built the 3D map with deep-learning tools from roughly 800,000 satellite images captured in 2019. They found that Asia accounts for nearly half of all mapped buildings, about 1.22 billion, and also leads the world in total building volume at 1.27 trillion cubic meters, reflecting rapid urbanization and dense metropolitan areas in China, India, and Southeast Asia. Africa ranks second in building count at 540 million, but its total volume is only 117 billion cubic meters, dominated by small low-rise buildings. Finland's per-capita building volume is six times Greece's, while Niger's is 1/27 of the world average.

  7. Reddit claims Australia's under-16 social media ban infringes on freedom

    Reddit filed a lawsuit on Friday over Australia's law barring teenagers under 16 from social media. The law, a world first, took effect on Wednesday, and US-based Reddit is on the banned list. Reddit has asked Australia's High Court to review the case, arguing that as a discussion forum it should not have been included on the list of banned social media platforms. In its court filing, Reddit challenged the law's validity, saying it "infringes the freedom of political expression." Reddit said it agrees that children under 16 should be protected, but that the law's heavy-handed approach, combined with potentially insecure verification procedures, would strip older teenagers and young adults of the ability to take part in their peers' activities, including political discussion. The statement added: "Unlike other platforms subject to the law, the overwhelming majority of Reddit users are adults, and we do not market or serve ads to children under 18. In short, users under 16 are not Reddit's primary market, nor do we intend for them to become one."

  8. Anacondas were already giants ten million years ago

    A Cambridge-led research team reconstructed ancient anacondas from 12.4-million-year-old fossils found in Venezuela, finding that the tropical snakes reached up to 5.2 meters in length. During the middle-to-late Miocene, 12.4 to 5.3 million years ago, warmer global temperatures, vast wetlands, and abundant food allowed many animals to grow far larger than their modern relatives. Miocene giants such as the 12-meter caiman Purussaurus and the 3.2-meter giant freshwater turtle Stupendemys are extinct, but the anaconda survived as a giant. The team measured 183 fossil anaconda vertebrae from at least 32 snakes. The results show ancient anacondas were about 4 to 5 meters long, roughly the same as modern ones, which the researchers say demonstrates the snakes' remarkable adaptability.

  9. Switzerland considers capping its population at 10 million

    Rising support for far-right parties is pressuring European governments to tighten immigration controls. Switzerland is about to vote on a proposal that takes immigration control to a new level: a population cap. If the number of residents rises from today's roughly 9 million to over 10 million, the proposal could trigger a blanket ban on new immigration, applying equally to refugees, skilled workers, and executives with six-figure salaries. Under Switzerland's referendum system, citizens are expected to vote on the proposal next year, and polls suggest it is likely to pass. Restricting the movement of people has never favored economic growth; a blanket ban on new immigrants is expected to leave Switzerland short of key skills and damage its competitiveness. The result will show what citizens are willing to trade to preserve the country's attractiveness. The right-wing Swiss People's Party won 28% of the vote in the last election on a platform that casts Swiss citizenship as a privilege rather than a right. The party floated the population cap in 2023, packaging it as a way to protect the Swiss way of life and shield the environment from excessive human activity.

  10. 2024 Free Software Awards announced

    The Free Software Foundation (FSF) announced the winners of the 2024 Free Software Awards at the end of 2025. The Award for Projects of Social Benefit went to Govdirectory, a crowdsourced service for fact-checking government addresses, phone numbers, websites, and social media accounts; past winners include OpenStreetMap, Public Lab, and Let's Encrypt. The Award for Outstanding New Free Software Contributor went to GIMP contributor Alx Sa. The Award for the Advancement of Free Software went to veteran developer Andy Wingo, one of the maintainers of the GNU Guile project.

  11. Disney partners with OpenAI

    Disney has reversed its stance on AI companies using its copyrighted characters, announcing a partnership with OpenAI that includes a $1 billion investment in the company and rights to additional warrants. As part of the deal, Disney will allow OpenAI to use more than 200 of its copyrighted characters from Disney, Marvel, Star Wars, and Pixar to create short videos and images. The new features are expected to roll out in 2026 through OpenAI's video-generation platform Sora and ChatGPT. Some user-created short videos will also appear on Disney+. The agreement does not include rights to any character's likeness or voice. Disney employees will also be able to use OpenAI tools to build new products.

  12. Japan's kanji of the year: 熊 ("bear")

    The Japan Kanji Aptitude Testing Foundation announced that 熊 ("bear") has been chosen as the kanji best reflecting the public mood this year, citing a string of bear sightings and the damage they caused across the country. At Kiyomizu-dera temple in Kyoto, head priest Seihan Mori wrote the character on a sheet of washi paper about 1.5 meters long and 1.3 meters wide. The kanji-of-the-year selection began in 1995, making this the 31st edition. The foundation accepts submissions nationwide via its website and postcards, then selects the character with the most votes.

  13. Professor's hidden camera catches PhD student sabotaging another student's computers

    A professor at UC Berkeley sensed something was wrong: the university had lost $46,855 to computer damage, and nearly all of it involved a single PhD student. Was the student simply unlucky, or was something else going on? With the building manager's consent, the professor hid a camera in a laptop aimed at the student's computer. The camera caught another PhD student, 26-year-old Jiarui Zou, using tools to damage that student's laptop. Zou was charged with three counts of vandalism for damaging three computers between November 9 and 10, each time causing more than $400 in damage. Zou is also suspected of involvement in other acts of sabotage going back years. He was arrested in a dormitory on November 12, has since been released, and is due to make his first court appearance on December 15.

  14. Cisco's stock price finally tops its dot-com era peak

    Networking giant Cisco's shares hit $80.25 on Wednesday, finally surpassing the $80.06 peak set 25 years ago during the dot-com bubble, when Cisco's market capitalization briefly overtook Microsoft's to make it the world's most valuable company. The recovery took 25 years, 8 months, and 13 days. Cisco's fundamentals have improved markedly over that span: since 1999, revenue has grown nearly fivefold, profit fourfold, and earnings per share eightfold, with margins staying healthy. Even so, investors who bought at the peak still lost money to inflation over those 25 years. Cisco's trajectory has parallels with Nvidia, which as the largest AI chip supplier has ridden the AI boom to become the world's most valuable company, trading at a price-to-earnings ratio above 45 and an enterprise value of nearly 24 times sales. Cisco in 2000 traded at a P/E above 200 and an enterprise value of 31 times sales.

  15. Pop!_OS 24.04 LTS released

    After a delay of a year and a half, Linux PC maker System76 has released Pop!_OS 24.04 LTS, alongside COSMIC, its desktop environment rewritten in Rust. Pop!_OS is based on Ubuntu LTS and was originally meant to track Ubuntu's LTS release cadence, but slipped because COSMIC's development fell behind schedule. Major changes in Pop!_OS 24.04 LTS include: support for ARM computers; new hybrid graphics support, letting users right-click an app to choose whether it runs on the discrete or integrated GPU, with apps that require the discrete GPU running on it automatically; improved hardware support; and more.