OrangeBot.AI Digest — 2025-12-12

59 headlines across 8 sources, aggregated for this day.

Hacker News (15)

  1. macOS 26.2 enables fast AI clusters with RDMA over Thunderbolt (developer.apple.com)
  2. Benn Jordan’s Flock camera jammer will send you to jail in Florida now [video] (www.youtube.com)
  3. Home Depot GitHub token exposed for a year, granted access to internal systems (techcrunch.com)
  4. Id Software devs form "wall-to-wall" union (www.rockpapershotgun.com)
  5. Google releases its new Google Sans Flex font as open source (www.omgubuntu.co.uk)
  6. CM0 – A new Raspberry Pi you can't buy (www.jeffgeerling.com)
  7. Epic celebrates "the end of the Apple Tax" after court win in iOS payments case (arstechnica.com)
  8. Framework Raises DDR5 Memory Prices by 50% for DIY Laptops (www.phoronix.com)
  9. Berlin Approves New Expansion of Police Surveillance Powers (reclaimthenet.org)
  10. SQLite JSON at full index speed using generated columns (www.dbpro.app)
  11. The Tor Project is switching to Rust (itsfoss.com)
  12. Koralm Railway (infrastruktur.oebb.at)
  13. 4 billion if statements (2023) (andreasjhkarlsson.github.io)
  14. Young journalists expose Russian-linked vessels off the Dutch and German coast (www.digitaldigging.org)
  15. Guarding My Git Forge Against AI Scrapers (vulpinecitrus.info)
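
Item 10's technique is easy to demonstrate: SQLite can index a JSON field by materializing it into a generated column, so equality lookups hit a B-tree index instead of re-parsing JSON on every row. A minimal sketch using Python's built-in sqlite3 module; the `events` schema and column names are illustrative, not from the linked article (generated columns need SQLite 3.31+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (
        id INTEGER PRIMARY KEY,
        payload TEXT,
        -- materialize the JSON field once per row instead of per query
        user_id TEXT GENERATED ALWAYS AS (json_extract(payload, '$.user_id')) STORED
    );
    CREATE INDEX idx_events_user ON events(user_id);
""")
conn.execute("INSERT INTO events (payload) VALUES (?)", ('{"user_id": "alice", "n": 1}',))
conn.execute("INSERT INTO events (payload) VALUES (?)", ('{"user_id": "bob", "n": 2}',))

# The planner can now use the index rather than scanning json_extract per row.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM events WHERE user_id = 'alice'"
).fetchall()
rows = conn.execute("SELECT id FROM events WHERE user_id = 'alice'").fetchall()
print(plan)   # the detail column should mention idx_events_user, not a full SCAN
print(rows)
```

Recent SQLite versions should also be able to match the raw expression form, `WHERE json_extract(payload, '$.user_id') = 'alice'`, against the generated column and use the same index.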

GitHub Trending (14)

  1. thedotmack / claude-mem

    A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions.

  2. block / goose

    an open source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM

  3. datawhalechina / hello-agents

    📚 "Building Agents from Scratch": a from-scratch tutorial on the principles and practice of AI agents

  4. agentsmd / agents.md

    AGENTS.md — a simple, open format for guiding coding agents

  5. GoogleCloudPlatform / agent-starter-pack

    Ship AI Agents to Google Cloud in minutes, not months. Production-ready templates with built-in CI/CD, evaluation, and observability.

  6. YimMenu / YimMenuV2

    Experimental menu for GTA 5: Enhanced

  7. refly-ai / refly

    Vibe Workflow Platform for Non-technical Creators.

  8. HotCakeX / Harden-Windows-Security

    Harden Windows Safely, Securely using Official Supported Microsoft methods and proper explanation | Always up-to-date and works with the latest build of Windows | Provides tools and Guides for Personal, Enterprise, Government and Military security levels | SLSA Level 3 Compliant for Secure Development and Build Process | Apps Available on MS Store✨

  9. DayuanJiang / next-ai-draw-io

    A next.js web application that integrates AI capabilities with draw.io diagrams. This app allows you to create, modify, and enhance diagrams through natural language commands and AI-assisted visualization.

  10. tursodatabase / turso

    Turso is an in-process SQL database, compatible with SQLite.

  11. langgenius / dify

    Production-ready platform for agentic workflow development.

  12. tempoxyz / tempo

    the blockchain for payments

  13. infiniflow / ragflow

    RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs

  14. grpc / grpc-go

    The Go language implementation of gRPC. HTTP/2 based RPC

Hugging Face (15)

  1. T-pro 2.0: An Efficient Russian Hybrid-Reasoning Model and Playground

    We introduce T-pro 2.0, an open-weight Russian LLM for hybrid reasoning and efficient inference. The model supports direct answering and reasoning-trace generation, using a Cyrillic-dense tokenizer and an adapted EAGLE speculative-decoding pipeline to reduce latency. To enable reproducible and extensible research, we release the model weights, the T-Wix 500k instruction corpus, the T-Math reasoning benchmark, and the EAGLE weights on Hugging Face. These resources allow users to study Russian-language reasoning and to extend or adapt both the model and the inference pipeline. A public web demo exposes reasoning and non-reasoning modes and illustrates the speedups achieved by our inference stack across domains. T-pro 2.0 thus serves as an accessible open system for building and evaluating efficient, practical Russian LLM applications.

  2. Long-horizon Reasoning Agent for Olympiad-Level Mathematical Problem Solving

  3. Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation

    Reinforcement learning (RL), earlier proven effective in large language and multi-modal models, has recently been extended to enhance 2D image generation. However, applying RL to 3D generation remains largely unexplored due to the higher spatial complexity of 3D objects, which require globally consistent geometry and fine-grained local textures. This makes 3D generation significantly more sensitive to reward designs and RL algorithms. To address these challenges, we conduct the first systematic study of RL for text-to-3D autoregressive generation across several dimensions. (1) Reward designs: We evaluate reward dimensions and model choices, showing that alignment with human preference is crucial, and that general multi-modal models provide robust signal for 3D attributes. (2) RL algorithms: We study GRPO variants, highlighting the effectiveness of token-level optimization, and further investigate the scaling of training data and iterations. (3) Text-to-3D benchmarks: Since existing benchmarks fail to measure implicit reasoning abilities in 3D generation models, we introduce MME-3DR. (4) Advanced RL paradigms: Motivated by the natural hierarchy of 3D generation, we propose Hi-GRPO, which optimizes the global-to-local hierarchical 3D generation through dedicated reward ensembles. Based on these insights, we develop AR3D-R1, the first RL-enhanced text-to-3D model, progressing from coarse shape to texture refinement. We hope this study provides insights into RL-driven reasoning for 3D generation. Code is released at https://github.com/Ivan-Tang-3D/3DGen-R1.

  4. OPV: Outcome-based Process Verifier for Efficient Long Chain-of-Thought Verification

    Large language models (LLMs) have achieved significant progress in solving complex reasoning tasks by Reinforcement Learning with Verifiable Rewards (RLVR). This advancement is also inseparable from the oversight automated by reliable verifiers. However, current outcome-based verifiers (OVs) are unable to inspect the unreliable intermediate steps in the long reasoning chains of thought (CoTs). Meanwhile, current process-based verifiers (PVs) have difficulty reliably detecting errors in complex long CoTs, limited by the scarcity of high-quality annotations due to the prohibitive costs of human annotation. Therefore, we propose the Outcome-based Process Verifier (OPV), which verifies the rationale process of summarized outcomes from long CoTs to achieve both accurate and efficient verification and enable large-scale annotation. To empower the proposed verifier, we adopt an iterative active learning framework with expert annotations to progressively improve the verification capability of OPV at lower annotation cost. Specifically, in each iteration, the most uncertain cases of the current best OPV are annotated and then used to train a new OPV through Rejection Fine-Tuning (RFT) and RLVR for the next round. Extensive experiments demonstrate OPV's superior performance and broad applicability. It achieves new state-of-the-art results on our held-out OPV-Bench, outperforming much larger open-source models such as Qwen3-Max-Preview with an F1 score of 83.1 compared to 76.3. Furthermore, OPV effectively detects false positives within synthetic datasets, closely aligning with expert assessment. When collaborating with policy models, OPV consistently yields performance gains, e.g., raising the accuracy of DeepSeek-R1-Distill-Qwen-32B from 55.2% to 73.3% on AIME2025 as the compute budget scales.
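
The active-learning loop in this abstract (label the current verifier's most uncertain cases each round, then retrain) is simple to sketch. Everything below is illustrative: the function names, the toy confidence model, and the `annotate` oracle stand in for the paper's actual OPV, expert annotators, and RFT/RLVR training steps.

```python
import random

def active_learning_round(pool, verifier_confidence, annotate, budget):
    """One uncertainty-sampling round (illustrative names, not the paper's code).

    pool                -- unlabeled verification cases
    verifier_confidence -- current OPV's confidence in [0, 1] for a case
    annotate            -- oracle standing in for expert annotation
    budget              -- number of expert labels affordable this round
    """
    # Rank cases by uncertainty: confidence closest to 0.5 is most uncertain.
    ranked = sorted(pool, key=lambda c: abs(verifier_confidence(c) - 0.5))
    chosen = ranked[:budget]
    labels = {c: annotate(c) for c in chosen}
    remaining = [c for c in pool if c not in labels]
    return labels, remaining

# Toy run: cases are ints; the "verifier" is confident only on even numbers,
# so the labeling budget should go to the uncertain odd ones.
random.seed(0)
pool = list(range(10))
conf = lambda c: 0.9 if c % 2 == 0 else 0.5 + random.uniform(-0.05, 0.05)
labels, remaining = active_learning_round(
    pool, conf, annotate=lambda c: c % 3 == 0, budget=3
)
print(sorted(labels), len(remaining))  # three odd cases labeled, 7 left in the pool
```

In the paper, `labels` would then feed Rejection Fine-Tuning and RLVR to produce the next round's verifier; the toy run just shows that uncertainty sampling concentrates the labeling budget on the hard cases.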

  5. Achieving Olympiad-Level Geometry Large Language Model Agent via Complexity-Boosting Reinforcement Learning

    Large language model (LLM) agents exhibit strong mathematical problem-solving abilities and can even solve International Mathematical Olympiad (IMO) level problems with the assistance of formal proof systems. However, due to weak heuristics for auxiliary constructions, AI for geometry problem solving remains dominated by expert models such as AlphaGeometry 2, which rely heavily on large-scale data synthesis and search for both training and evaluation. In this work, we make the first attempt to build a medalist-level LLM agent for geometry and present InternGeometry. InternGeometry overcomes the heuristic limitations in geometry by iteratively proposing propositions and auxiliary constructions, verifying them with a symbolic engine, and reflecting on the engine's feedback to guide subsequent proposals. A dynamic memory mechanism enables InternGeometry to conduct more than two hundred interactions with the symbolic engine per problem. To further accelerate learning, we introduce Complexity-Boosting Reinforcement Learning (CBRL), which gradually increases the complexity of synthesized problems across training stages. Built on InternThinker-32B, InternGeometry solves 44 of 50 IMO geometry problems (2000-2024), exceeding the average gold medalist score (40.9), using only 13K training examples, just 0.004% of the data used by AlphaGeometry 2, demonstrating the potential of LLM agents on expert-level geometry tasks. InternGeometry can also propose novel auxiliary constructions for IMO problems that do not appear in human solutions. We will release the model, data, and symbolic engine to support future research.

  6. MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos

    Motion capture now underpins content creation far beyond digital humans, yet most existing pipelines remain species- or template-specific. We formalize this gap as Category-Agnostic Motion Capture (CAMoCap): given a monocular video and an arbitrary rigged 3D asset as a prompt, the goal is to reconstruct a rotation-based animation such as BVH that directly drives the specific asset. We present MoCapAnything, a reference-guided, factorized framework that first predicts 3D joint trajectories and then recovers asset-specific rotations via constraint-aware inverse kinematics. The system contains three learnable modules and a lightweight IK stage: (1) a Reference Prompt Encoder that extracts per-joint queries from the asset's skeleton, mesh, and rendered images; (2) a Video Feature Extractor that computes dense visual descriptors and reconstructs a coarse 4D deforming mesh to bridge the gap between video and joint space; and (3) a Unified Motion Decoder that fuses these cues to produce temporally coherent trajectories. We also curate Truebones Zoo with 1038 motion clips, each providing a standardized skeleton-mesh-render triad. Experiments on both in-domain benchmarks and in-the-wild videos show that MoCapAnything delivers high-quality skeletal animations and exhibits meaningful cross-species retargeting across heterogeneous rigs, enabling scalable, prompt-driven 3D motion capture for arbitrary assets. Project page: https://animotionlab.github.io/MoCapAnything/

  7. From Macro to Micro: Benchmarking Microscopic Spatial Intelligence on Molecules via Vision-Language Models

    This paper introduces the concept of Microscopic Spatial Intelligence (MiSI), the capability to perceive and reason about the spatial relationships of invisible microscopic entities, which is fundamental to scientific discovery. To assess the potential of Vision-Language Models (VLMs) in this domain, we propose a systematic benchmark framework MiSI-Bench. This framework features over 163,000 question-answer pairs and 587,000 images derived from approximately 4,000 molecular structures, covering nine complementary tasks that evaluate abilities ranging from elementary spatial transformations to complex relational identifications. Experimental results reveal that current state-of-the-art VLMs perform significantly below human level on this benchmark. However, a fine-tuned 7B model demonstrates substantial potential, even surpassing humans in spatial transformation tasks, while its poor performance in scientifically-grounded tasks like hydrogen bond recognition underscores the necessity of integrating explicit domain knowledge for progress toward scientific AGI. The datasets are available at https://huggingface.co/datasets/zongzhao/MiSI-bench.

  8. VQRAE: Representation Quantization Autoencoders for Multimodal Understanding, Generation and Reconstruction

    Unifying multimodal understanding, generation and reconstruction representations in a single tokenizer remains a key challenge in building unified models. Previous research predominantly attempts to address this with a dual-encoder paradigm, e.g., using separate encoders for understanding and generation, or balancing semantic representations and low-level features with a contrastive loss. In this paper, we propose VQRAE, a Vector Quantization version of Representation AutoEncoders, which pioneers the first exploration of a unified representation that produces continuous semantic features for image understanding and discrete tokens for visual generation within a single tokenizer. Specifically, we build upon pretrained vision foundation models with a symmetric ViT decoder and adopt a two-stage training strategy: first, we freeze the encoder and learn a high-dimensional semantic VQ codebook with a pixel reconstruction objective; then we jointly optimize the encoder with self-distillation constraints. This design incurs negligible loss of semantic information, maintaining multimodal understanding ability while yielding discrete tokens compatible with generation and fine-grained reconstruction. Besides, we identify an intriguing property of quantizing semantic encoders: they rely on a high-dimensional codebook, in contrast to the previous common practice of low-dimensional codebooks in image reconstruction. The semantic VQ codebook achieves a 100% utilization ratio at a dimension of 1536. VQRAE presents competitive performance on several benchmarks of visual understanding, generation and reconstruction, with a promising scaling property in the autoregressive paradigm owing to its discrete tokens.

  9. BEAVER: An Efficient Deterministic LLM Verifier

    As large language models (LLMs) transition from research prototypes to production systems, practitioners often need reliable methods to verify that model outputs satisfy required constraints. While sampling-based estimates provide an intuition of model behavior, they offer no sound guarantees. We present BEAVER, the first practical framework for computing deterministic, sound probability bounds on LLM constraint satisfaction. Given any prefix-closed semantic constraint, BEAVER systematically explores the generation space using novel token trie and frontier data structures, maintaining provably sound bounds at every iteration. We formalize the verification problem, prove soundness of our approach, and evaluate BEAVER on correctness verification, privacy verification and secure code generation tasks across multiple state of the art LLMs. BEAVER achieves 6 to 8 times tighter probability bounds and identifies 3 to 4 times more high risk instances compared to baseline methods under identical computational budgets, enabling precise characterization and risk assessment that loose bounds or empirical evaluation cannot provide.
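
The anytime sound bounds BEAVER maintains can be illustrated on a toy model: explore the generation tree best-first, and at any point the probability of satisfying the constraint is at least the mass of explored satisfying completions and at most that plus all unexplored mass. The tiny model, constraint, and data structures below are hypothetical stand-ins, not BEAVER's actual API:

```python
import heapq

# A tiny "LLM": each prefix maps to {next_token: probability}; "$" ends a sequence.
MODEL = {
    "":   {"a": 0.6, "b": 0.4},
    "a":  {"a": 0.5, "$": 0.5},
    "aa": {"$": 1.0},
    "b":  {"$": 1.0},
}

def satisfies(seq):
    return "b" not in seq  # the constraint: output contains no "b"

def bounds(budget):
    """Sound lower/upper bounds on P(output satisfies the constraint)."""
    lower, unexplored = 0.0, 1.0
    frontier = [(-1.0, "")]  # max-heap on prefix probability (best-first)
    for _ in range(budget):
        if not frontier:
            break
        neg_p, prefix = heapq.heappop(frontier)
        p = -neg_p
        unexplored -= p  # this prefix's mass is now being resolved
        for tok, q in MODEL[prefix].items():
            if tok == "$":
                if satisfies(prefix):
                    lower += p * q
                # unsatisfying finished mass simply drops out of the upper bound
            else:
                heapq.heappush(frontier, (-p * q, prefix + tok))
                unexplored += p * q
    return lower, lower + unexplored

print(bounds(budget=2))   # loose bounds after two expansions
print(bounds(budget=10))  # converged: both ends near 0.6, the true satisfying mass
```

The bounds are sound at every iteration and only tighten with more budget, which is the property that separates this style of verification from sampling-based estimates.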

  10. Thinking with Images via Self-Calling Agent

    Thinking-with-images paradigms have showcased remarkable visual reasoning capability by integrating visual information as dynamic elements into the Chain-of-Thought (CoT). However, optimizing interleaved multimodal CoT (iMCoT) through reinforcement learning remains challenging, as it relies on scarce high-quality reasoning data. In this study, we propose Self-Calling Chain-of-Thought (sCoT), a novel visual reasoning paradigm that reformulates iMCoT as a language-only CoT with self-calling. Specifically, a main agent decomposes the complex visual reasoning task into atomic subtasks and invokes its virtual replicas, i.e., parameter-sharing subagents, to solve them in isolated contexts. sCoT enjoys substantial training effectiveness and efficiency, as it requires no explicit interleaving between modalities, and employs group-relative policy optimization to reinforce effective reasoning behavior. Experiments on HR-Bench 4K show that sCoT improves overall reasoning performance by up to 1.9% with ~75% fewer GPU hours compared to strong baseline approaches. Code is available at https://github.com/YWenxi/think-with-images-through-self-calling.

  11. Evaluating Gemini Robotics Policies in a Veo World Simulator

    Generative world models hold significant potential for simulating interactions with visuomotor policies in varied environments. Frontier video models can enable generation of realistic observations and environment interactions in a scalable and general manner. However, the use of video models in robotics has been limited primarily to in-distribution evaluations, i.e., scenarios that are similar to ones used to train the policy or fine-tune the base video model. In this report, we demonstrate that video models can be used for the entire spectrum of policy evaluation use cases in robotics: from assessing nominal performance to out-of-distribution (OOD) generalization, and probing physical and semantic safety. We introduce a generative evaluation system built upon a frontier video foundation model (Veo). The system is optimized to support robot action conditioning and multi-view consistency, while integrating generative image-editing and multi-view completion to synthesize realistic variations of real-world scenes along multiple axes of generalization. We demonstrate that the system preserves the base capabilities of the video model to enable accurate simulation of scenes that have been edited to include novel interaction objects, novel visual backgrounds, and novel distractor objects. This fidelity enables accurately predicting the relative performance of different policies in both nominal and OOD conditions, determining the relative impact of different axes of generalization on policy performance, and performing red teaming of policies to expose behaviors that violate physical or semantic safety constraints. We validate these capabilities through 1600+ real-world evaluations of eight Gemini Robotics policy checkpoints and five tasks for a bimanual manipulator.

  12. StereoSpace: Depth-Free Synthesis of Stereo Geometry via End-to-End Diffusion in a Canonical Space

    We introduce StereoSpace, a diffusion-based framework for monocular-to-stereo synthesis that models geometry purely through viewpoint conditioning, without explicit depth or warping. A canonical rectified space and the conditioning guide the generator to infer correspondences and fill disocclusions end-to-end. To ensure fair and leakage-free evaluation, we introduce an end-to-end protocol that excludes any ground truth or proxy geometry estimates at test time. The protocol emphasizes metrics reflecting downstream relevance: iSQoE for perceptual comfort and MEt3R for geometric consistency. StereoSpace surpasses other methods from the warp & inpaint, latent-warping, and warped-conditioning categories, achieving sharp parallax and strong robustness on layered and non-Lambertian scenes. This establishes viewpoint-conditioned diffusion as a scalable, depth-free solution for stereo generation.

  13. Stronger Normalization-Free Transformers

    Although normalization layers have long been viewed as indispensable components of deep learning architectures, the recent introduction of Dynamic Tanh (DyT) has demonstrated that alternatives are possible. The point-wise function DyT constrains extreme values for stable convergence and reaches normalization-level performance; this work searches for function designs that can surpass it. We first study how the intrinsic properties of point-wise functions influence training and performance. Building on these findings, we conduct a large-scale search for a more effective function design. Through this exploration, we introduce Derf(x) = erf(αx + s), where erf(x) is the rescaled Gaussian cumulative distribution function, and identify it as the most performant design. Derf outperforms LayerNorm, RMSNorm, and DyT across a wide range of domains, including vision (image recognition and generation), speech representation, and DNA sequence modeling. Our findings suggest that the performance gains of Derf largely stem from improved generalization rather than stronger fitting capacity. Its simplicity and stronger performance make Derf a practical choice for normalization-free Transformer architectures.
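
The function at the center of the abstract is a one-liner; here is a sketch with Python's math.erf, where alpha and s stand for the learnable scale and shift (per-channel tensors in a real Transformer, plain floats here for illustration):

```python
import math

def derf(x, alpha=1.0, s=0.0):
    """Derf(x) = erf(alpha * x + s): a bounded point-wise replacement for a
    normalization layer, in the spirit of DyT's tanh(alpha * x)."""
    return math.erf(alpha * x + s)

# Like tanh, erf squashes extreme values into (-1, 1), which the abstract
# credits for stable convergence without LayerNorm/RMSNorm.
print([round(derf(x), 3) for x in (-3.0, -1.0, 0.0, 1.0, 3.0)])
# [-1.0, -0.843, 0.0, 0.843, 1.0]
```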

  14. MoRel: Long-Range Flicker-Free 4D Motion Modeling via Anchor Relay-based Bidirectional Blending with Hierarchical Densification

    Recent advances in 4D Gaussian Splatting (4DGS) have extended the high-speed rendering capability of 3D Gaussian Splatting (3DGS) into the temporal domain, enabling real-time rendering of dynamic scenes. However, one of the major remaining challenges lies in modeling long-range motion in dynamic videos, where a naive extension of existing methods leads to severe memory explosion, temporal flickering, and failure to handle occlusions that appear or disappear over time. To address these challenges, we propose MoRel, a novel 4DGS framework built around an Anchor Relay-based Bidirectional Blending (ARBB) mechanism, which enables temporally consistent and memory-efficient modeling of long-range dynamic scenes. Our method progressively constructs locally canonical anchor spaces at key-frame time indices, forming key-frame anchors (KfA), and models inter-frame deformations at the anchor level, enhancing temporal coherence. By learning bidirectional deformations between KfA and adaptively blending them through learnable opacity control, our approach mitigates temporal discontinuities and flickering artifacts. We further introduce a Feature-variance-guided Hierarchical Densification (FHD) scheme that effectively densifies KfA while preserving rendering quality, based on an assigned level of feature variance. To evaluate our model's ability to handle real-world long-range 4D motion, we compose a new long-range 4D motion dataset, SelfCap-LR, with a larger average dynamic motion magnitude, captured in spatially wider spaces, than previous dynamic video datasets. Overall, MoRel achieves temporally coherent and flicker-free long-range 4D reconstruction while maintaining bounded memory usage, demonstrating both scalability and efficiency in dynamic Gaussian-based representations.

  15. H2R-Grounder: A Paired-Data-Free Paradigm for Translating Human Interaction Videos into Physically Grounded Robot Videos

    Robots that learn manipulation skills from everyday human videos could acquire broad capabilities without tedious robot data collection. We propose a video-to-video translation framework that converts ordinary human-object interaction videos into motion-consistent robot manipulation videos with realistic, physically grounded interactions. Our approach does not require any paired human-robot videos for training, only a set of unpaired robot videos, making the system easy to scale. We introduce a transferable representation that bridges the embodiment gap: by inpainting the robot arm out of training videos to obtain a clean background and overlaying a simple visual cue (a marker and arrow indicating the gripper's position and orientation), we can condition a generative model to insert the robot arm back into the scene. At test time, we apply the same process to human videos (inpainting the person and overlaying human pose cues) and generate high-quality robot videos that mimic the human's actions. We fine-tune a SOTA video diffusion model (Wan 2.2) in an in-context learning manner to ensure temporal coherence and leverage its rich prior knowledge. Empirical results demonstrate that our approach achieves significantly more realistic and grounded robot motions than baselines, pointing to a promising direction for scaling up robot learning from unlabeled human videos. Project page: https://showlab.github.io/H2R-Grounder/

Solidot (15)

  1. Switzerland considers capping its population at 10 million

    Rising support for far-right parties is pressuring European governments to tighten immigration controls. Switzerland is about to vote on a proposal that takes immigration control to a new level: a population cap. If the number of residents grows from today's roughly 9 million to over 10 million, the proposal could trigger a blanket ban on new immigration, treating refugees, skilled workers, and executives on six-figure salaries alike. Under Switzerland's referendum system, citizens are expected to vote on the proposal next year, and polls suggest it is likely to pass. Restricting the movement of people has never been good for economic growth, and a blanket ban on new immigrants is expected to create shortages of key skills and damage the country's competitiveness. The result will show what trade-offs citizens are willing to make to preserve the country's attractiveness. The right-wing Swiss People's Party, which won 28% of the vote in the last election, campaigns on framing Swiss citizenship as a privilege rather than a right. The party floated the population cap in 2023, packaging it as a way to protect the Swiss way of life and shield the environment from excessive human activity.

  2. Winners of the 2024 Free Software Awards announced

    The Free Software Foundation (FSF) announced the winners of its 2024 Free Software Awards at the end of 2025. The Award for Projects of Social Benefit went to Govdirectory, a crowdsourced service that fact-checks government addresses, phone numbers, websites, and social media accounts; past winners include OpenStreetMap, Public Lab, and Let's Encrypt. The Award for Outstanding New Free Software Contributor went to GIMP contributor Alx Sa. The Award for the Advancement of Free Software went to veteran developer Andy Wingo, one of the maintainers of the GNU Guile project.

  3. Disney partners with OpenAI

    Disney has reversed its stance on AI companies using its copyrighted characters, announcing a partnership with OpenAI under which it will invest $1 billion in the company and receive rights to additional warrants. As part of the deal, Disney will allow OpenAI to use more than 200 of its copyrighted characters from Disney, Marvel, Star Wars, and Pixar to generate short videos and images. The new features are expected to roll out in 2026 through OpenAI's video-generation platform Sora and ChatGPT. Some user-created short videos will also appear on Disney+. The agreement does not include rights to any character likenesses or voices. Disney employees will also be able to use OpenAI tools to build new products.

  4. Japan's kanji of the year: 熊 ("bear")

    The Japan Kanji Aptitude Testing Foundation announced that 熊 ("bear") was chosen as the kanji best reflecting the public mood this year, citing a string of bear encounters across the country and the damage they caused. At Kiyomizu-dera temple in Kyoto, head priest Seihan Mori wrote the character on a sheet of washi paper roughly 1.5 meters long and 1.3 meters wide. The kanji-of-the-year selection began in 1995; this year marks the 31st edition. The foundation collects entries from across the country via its website and by postcard, then selects the character with the most votes.

  5. Professor uses hidden camera to catch PhD student sabotaging another student's computers

    A professor at UC Berkeley sensed something was wrong: the university had lost $46,855 to computer damage, and nearly all of it involved a single PhD student. Was the student simply unlucky, or was something else going on? With the building manager's consent, the professor hid a camera in a laptop and pointed it at the student's computer. The camera caught another PhD student, 26-year-old Jiarui Zou, using tools to damage that student's laptop. Zou has been charged with three counts of vandalism for damaging three computers between November 9 and 10, each time causing more than $400 in damage. Zou is also suspected of involvement in other acts of sabotage stretching back years. He was arrested in a dormitory on November 12, has since been released, and is due to make his first court appearance on December 15.

  6. Cisco's share price finally tops its dotcom-era peak

    Shares of networking giant Cisco hit $80.25 on Wednesday, finally surpassing the $80.06 peak set 25 years ago during the dotcom bubble, when Cisco briefly overtook Microsoft as the world's most valuable company. The road back took 25 years, 8 months, and 13 days. Cisco's fundamentals have improved markedly in that time: since 1999, revenue has grown nearly five-fold, profit four-fold, and earnings per share eight-fold, with margins staying healthy. Even so, investors who bought at the peak still lost money over those 25 years once inflation is taken into account. Cisco's trajectory has parallels with Nvidia, which the AI boom has made the world's most valuable company as the largest supplier of AI chips; Nvidia trades at a price-to-earnings ratio above 45 and an enterprise value of nearly 24 times sales. Cisco in 2000 traded at a P/E above 200 and an enterprise value of 31 times sales.

  7. Pop!_OS 24.04 LTS released

    After a delay of a year and a half, System76, the Linux PC maker behind the Pop!_OS distribution, has finally released Pop!_OS 24.04 LTS, together with COSMIC, its desktop environment rewritten in Rust. Pop!_OS is based on Ubuntu LTS and was originally meant to track Ubuntu's LTS release cadence, but fell behind because of COSMIC's development schedule. Major changes in Pop!_OS 24.04 LTS include: support for ARM computers; new hybrid-graphics support that lets users right-click an application to choose whether it runs on the discrete or integrated GPU, with applications that require the discrete GPU running there automatically; improved hardware support; and more.

  8. Do Kwon sentenced to 15 years

    Terraform Labs founder Do Kwon has been sentenced to 15 years in prison by a federal court in New York on fraud and conspiracy charges. He also faces charges in South Korea; under his plea agreement, he can be extradited to South Korea after serving half of his US sentence. Terraform issued the algorithmic stablecoin TerraUSD (UST); when UST collapsed in May 2022, it dragged the linked token LUNA to near-worthlessness and cost customers $40 billion. Do Kwon, a South Korean citizen whose company was headquartered in Singapore, was arrested in Montenegro in March 2023 and extradited to the US at the end of 2024. In August this year he admitted to misleading investors by claiming his token could hold its price stable during cryptocurrency market turbulence.

  9. US may require visitors to provide five years of social media history

    Under a new proposal disclosed by US officials, visitors from the UK and other countries may be required to provide five years of social media history. The new rules would affect travelers from dozens of countries who could previously enter the US visa-free for stays of up to 90 days simply by filling out an Electronic System for Travel Authorization (ESTA) form. Since returning to the White House in January, Trump has tightened US border controls in the name of national security. Analysts say the plan could deter prospective visitors or undermine their digital rights. Asked whether the move would cause a sharp drop in tourism, Trump said he was not worried, saying the point was to keep people who should not be coming out of the US.

  10. HDMI Forum continues to block HDMI 2.1 on Linux

    After the HDMI licensing body, the HDMI Forum, closed public access to the HDMI 2.1 specification in 2021, releasing an open-source driver now requires the Forum's approval. In 2024 it rejected AMD's attempt to ship one, blocking AMD's FreeSync from supporting 4K at 120 Hz or 5K at 240 Hz over HDMI on Linux. Now Valve, maker of the Steam Machine, has confirmed that the HDMI Forum is still blocking HDMI 2.1 implementations on Linux. The Steam Machine supports HDMI 2.1 in theory but is limited in software to HDMI 2.0, making anything beyond 4K at 60 Hz essentially unattainable. Valve says it has verified the HDMI 2.1 hardware under Windows.

  11. After Russian launch-pad accident, NASA moves Dragon cargo flights forward

    Late last month, during the launch of Russia's crewed Soyuz MS-28, the mobile service structure at the base of the rocket was not properly secured, toppled over, and was badly damaged. Roscosmos says repairing the pad and restoring its launch capability will take at least four months. The pad is Russia's only site for launching crewed Soyuz and uncrewed Progress cargo ships to the International Space Station, and the Progress MS-33 cargo flight planned for December 21 has been cancelled. To ensure station astronauts have enough supplies in 2026, NASA plans to move its Dragon cargo flights forward: the next CRS-34 Dragon resupply mission shifts from June 2026 to May, and the following CRS-35 mission moves forward three months, from November 2026 to August. Whether Russia can repair the pad infrastructure within four months remains to be seen; both the war and cold weather are obstacles to the repair work. Beyond SpaceX's vehicles, NASA has other options: Northrop Grumman's Cygnus cargo ship could launch as early as April 2026, and Japan's new HTV-X cargo ship is also scheduled to launch next summer.

  12. NASA loses contact with the MAVEN Mars orbiter

    NASA has announced that it has lost contact with the MAVEN Mars orbiter. NASA has three spacecraft in Mars orbit: Mars Odyssey, launched in 2001; the Mars Reconnaissance Orbiter (MRO), launched in 2005; and the Mars Atmosphere and Volatile Evolution (MAVEN) orbiter, launched in 2013. MAVEN is the youngest of the three; the other two are nearing the end of their lives: Mars Odyssey is almost out of fuel, while MRO's fuel should last into the 2030s. Built by Lockheed Martin, MAVEN was designed to study the interaction between the Sun and the Martian atmosphere. Ground teams last received a signal from MAVEN on December 6; telemetry up to that point showed all subsystems working normally. NASA says it is investigating, taking appropriate steps, and will share more information as soon as it is available.

  13. School meal programs slightly improve student achievement

    School meal programs aim to reduce hunger and improve children's learning, concentration, and overall health. Low- and middle-income countries account for about 90% of global malnutrition. The latest research covers 91,000 primary and secondary school students, with most of the underlying studies coming from low- and middle-income countries. Overall, the authors found that school meal programs in these countries can slightly improve students' math test scores and enrollment, and may produce small gains in relative growth measures such as height-for-age and weight-for-age. The programs likely have little or no effect on reading test scores or school attendance. Lead author Elizabeth Kristjansson, an emeritus professor at the University of Ottawa, said: "School meal programs play a key role in improving health and educational outcomes for disadvantaged children. The effects we see are small, but they are real. In my view, feeding hungry children is a moral obligation."

  14. Operation Bluebird wants to launch a new social network under the Twitter name

    Musk's X platform has abandoned the Twitter name, trademarks, and associated logos it acquired with the company. A startup called Operation Bluebird has petitioned the USPTO to cancel X's Twitter and tweet trademarks, hoping to launch a new social network under the Twitter name and attract existing users by reviving Twitter's former glory. Operation Bluebird has published a prototype at Twitter.new and is inviting users to reserve usernames. Founder Michael Peroff says Twitter-like networks such as Threads, Mastodon, and Bluesky have never matched Twitter's former scale and name recognition.

  15. India proposes charging AI companies for training on copyrighted works

    India's Department for Promotion of Industry and Internal Trade has published a proposed framework that would allow AI companies to train models on any copyrighted work, provided they pay royalties to a new collection body made up of rights-holder organizations, which would then distribute the money to creators. The proposal argues that this "compulsory blanket license" would lower compliance costs for AI companies while ensuring that writers, musicians, artists, and other rights holders are compensated when their works are used to train commercial models.