OrangeBot.AI Digest — 2025-12-14
60 headlines across 8 sources, aggregated for this day.
Hacker News(15)
- Anthropic Outage for Opus 4.5 and Sonnet 4/4.5 across all services (status.claude.com)
- 2002: Last.fm and Audioscrobbler Herald the Social Web (cybercultural.com)
- Ask HN: What Are You Working On? (December 2025)
- Stop crawling my HTML – use the API (shkspr.mobi)
- GraphQL: The enterprise honeymoon is over (johnjames.blog)
- Developing a food-safe finish for my wooden spoons (alinpanaitiu.com)
- Hashcards: A plain-text spaced repetition system (borretti.me)
- iOS 26.2 fixes 20 security vulnerabilities, 2 actively exploited (www.macrumors.com)
- Shai-Hulud compromised a dev machine and raided GitHub org access: a post-mortem (trigger.dev)
- The Gorman Paradox: Where Are All the AI-Generated Apps? (codemanship.wordpress.com)
- Apple Maps claims it's 29,905 miles away (mathstodon.xyz)
- Kimi K2 1T model runs on 2 512GB M3 Ultras (twitter.com)
- AI and the ironies of automation – Part 2 (www.ufried.com)
- Europeans' health data sold to US firm run by ex-Israeli spies (www.ftm.eu)
- Bye, Mom (aella.substack.com)
GitHub Trending(15)
- simstudioai / sim
Open-source platform to build and deploy AI agent workflows.
- openai / codex
Lightweight coding agent that runs in your terminal
- mdn / content
The official source for MDN Web Docs content. Home to over 14,000 pages of documentation about HTML, CSS, JS, HTTP, Web APIs, and more.
- Morganamilo / paru
Feature packed AUR helper
- Mebus / cupp
Common User Passwords Profiler (CUPP)
- ZJU-LLMs / Foundations-of-LLMs
A book for Learning the Foundations of LLMs
- daytonaio / daytona
Daytona is a Secure and Elastic Infrastructure for Running AI-Generated Code
- shadcn-ui / ui
A set of beautifully-designed, accessible components and a code distribution platform. Works with your favorite frameworks. Open Source. Open Code.
- datawhalechina / hello-agents
📚 "Building Agents from Scratch": a from-scratch tutorial on agent principles and practice
- HuLaSpark / HuLa
🍀 A cross-platform instant messaging desktop application with exceptional performance built on Rust + Vue3, compatible with Windows, macOS, Linux, Android, and iOS
- thedotmack / claude-mem
A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions.
- thinking-machines-lab / tinker-cookbook
Post-training with Tinker
- tursodatabase / turso
Turso is an in-process SQL database, compatible with SQLite.
- Tencent / WeKnora
LLM-powered framework for deep document understanding, semantic retrieval, and context-aware answers using RAG paradigm.
- virattt / ai-hedge-fund
An AI Hedge Fund Team
Hugging Face(15)
- T-pro 2.0: An Efficient Russian Hybrid-Reasoning Model and Playground
We introduce T-pro 2.0, an open-weight Russian LLM for hybrid reasoning and efficient inference. The model supports direct answering and reasoning-trace generation, using a Cyrillic-dense tokenizer and an adapted EAGLE speculative-decoding pipeline to reduce latency. To enable reproducible and extensible research, we release the model weights, the T-Wix 500k instruction corpus, the T-Math reasoning benchmark, and the EAGLE weights on Hugging Face. These resources allow users to study Russian-language reasoning and to extend or adapt both the model and the inference pipeline. A public web demo exposes reasoning and non-reasoning modes and illustrates the speedups achieved by our inference stack across domains. T-pro 2.0 thus serves as an accessible open system for building and evaluating efficient, practical Russian LLM applications.
- Long-horizon Reasoning Agent for Olympiad-Level Mathematical Problem Solving
Large language models (LLMs) have achieved significant progress in solving complex reasoning tasks through Reinforcement Learning with Verifiable Rewards (RLVR). This advancement is also inseparable from the oversight automated by reliable verifiers. However, current outcome-based verifiers (OVs) are unable to inspect the unreliable intermediate steps in long reasoning chains of thought (CoTs). Meanwhile, current process-based verifiers (PVs) have difficulty reliably detecting errors in complex long CoTs, limited by the scarcity of high-quality annotations due to the prohibitive cost of human annotation. Therefore, we propose the Outcome-based Process Verifier (OPV), which verifies the rationale process of summarized outcomes from long CoTs to achieve both accurate and efficient verification and enable large-scale annotation. To empower the proposed verifier, we adopt an iterative active learning framework with expert annotations to progressively improve the verification capability of OPV at lower annotation cost. Specifically, in each iteration, the most uncertain cases of the current best OPV are annotated and then used to train a new OPV through Rejection Fine-Tuning (RFT) and RLVR for the next round. Extensive experiments demonstrate OPV's superior performance and broad applicability. It achieves new state-of-the-art results on our held-out OPV-Bench, outperforming much larger open-source models such as Qwen3-Max-Preview with an F1 score of 83.1 compared to 76.3. Furthermore, OPV effectively detects false positives within synthetic datasets, closely aligning with expert assessments. When collaborating with policy models, OPV consistently yields performance gains, e.g., raising the accuracy of DeepSeek-R1-Distill-Qwen-32B from 55.2% to 73.3% on AIME2025 as the compute budget scales.
- Are We Ready for RL in Text-to-3D Generation? A Progressive Investigation
Reinforcement learning (RL), earlier proven effective in large language and multi-modal models, has recently been successfully extended to enhance 2D image generation. However, applying RL to 3D generation remains largely unexplored due to the higher spatial complexity of 3D objects, which require globally consistent geometry and fine-grained local textures. This makes 3D generation highly sensitive to reward designs and RL algorithms. To address these challenges, we conduct the first systematic study of RL for text-to-3D autoregressive generation across several dimensions. (1) Reward designs: We evaluate reward dimensions and model choices, showing that alignment with human preference is crucial, and that general multi-modal models provide robust signals for 3D attributes. (2) RL algorithms: We study GRPO variants, highlighting the effectiveness of token-level optimization, and further investigate the scaling of training data and iterations. (3) Text-to-3D benchmarks: Since existing benchmarks fail to measure implicit reasoning abilities in 3D generation models, we introduce MME-3DR. (4) Advanced RL paradigms: Motivated by the natural hierarchy of 3D generation, we propose Hi-GRPO, which optimizes the global-to-local hierarchical 3D generation through dedicated reward ensembles. Based on these insights, we develop AR3D-R1, the first RL-enhanced text-to-3D model, progressing from coarse shape generation to texture refinement. We hope this study provides insights into RL-driven reasoning for 3D generation. Code is released at https://github.com/Ivan-Tang-3D/3DGen-R1.
- OPV: Outcome-based Process Verifier for Efficient Long Chain-of-Thought Verification
Large language models (LLMs) have achieved significant progress in solving complex reasoning tasks through Reinforcement Learning with Verifiable Rewards (RLVR). This advancement is also inseparable from the oversight automated by reliable verifiers. However, current outcome-based verifiers (OVs) are unable to inspect the unreliable intermediate steps in long reasoning chains of thought (CoTs). Meanwhile, current process-based verifiers (PVs) have difficulty reliably detecting errors in complex long CoTs, limited by the scarcity of high-quality annotations due to the prohibitive cost of human annotation. Therefore, we propose the Outcome-based Process Verifier (OPV), which verifies the rationale process of summarized outcomes from long CoTs to achieve both accurate and efficient verification and enable large-scale annotation. To empower the proposed verifier, we adopt an iterative active learning framework with expert annotations to progressively improve the verification capability of OPV at lower annotation cost. Specifically, in each iteration, the most uncertain cases of the current best OPV are annotated and then used to train a new OPV through Rejection Fine-Tuning (RFT) and RLVR for the next round. Extensive experiments demonstrate OPV's superior performance and broad applicability. It achieves new state-of-the-art results on our held-out OPV-Bench, outperforming much larger open-source models such as Qwen3-Max-Preview with an F1 score of 83.1 compared to 76.3. Furthermore, OPV effectively detects false positives within synthetic datasets, closely aligning with expert assessments. When collaborating with policy models, OPV consistently yields performance gains, e.g., raising the accuracy of DeepSeek-R1-Distill-Qwen-32B from 55.2% to 73.3% on AIME2025 as the compute budget scales.
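The annotation loop described above, sending the verifier's most uncertain cases to experts each round before RFT + RLVR retraining, can be sketched generically (toy confidence scores, not the authors' pipeline):

```python
# One round of uncertainty-driven sample selection: score a pool of
# reasoning traces with the current verifier, then pick the cases
# whose predicted probability is closest to 0.5 (i.e., where the
# verifier is least sure) for expert annotation.

def most_uncertain(pool, prob, k=2):
    """Pick the k samples whose predicted probability is closest to 0.5."""
    return sorted(pool, key=lambda s: abs(prob(s) - 0.5))[:k]

# toy verifier confidences for five candidate CoT traces
scores = {"a": 0.95, "b": 0.52, "c": 0.10, "d": 0.47, "e": 0.80}
print(most_uncertain(scores, scores.get))  # → ['b', 'd']
```

The selected cases would then receive expert labels and join the training set for the next verifier iteration, concentrating annotation budget where the current model is weakest.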
- Achieving Olympiad-Level Geometry Large Language Model Agent via Complexity-Boosting Reinforcement Learning
Large language model (LLM) agents exhibit strong mathematical problem-solving abilities and can even solve International Mathematical Olympiad (IMO) level problems with the assistance of formal proof systems. However, due to weak heuristics for auxiliary constructions, AI for geometry problem solving remains dominated by expert models such as AlphaGeometry 2, which rely heavily on large-scale data synthesis and search for both training and evaluation. In this work, we make the first attempt to build a medalist-level LLM agent for geometry and present InternGeometry. InternGeometry overcomes the heuristic limitations in geometry by iteratively proposing propositions and auxiliary constructions, verifying them with a symbolic engine, and reflecting on the engine's feedback to guide subsequent proposals. A dynamic memory mechanism enables InternGeometry to conduct more than two hundred interactions with the symbolic engine per problem. To further accelerate learning, we introduce Complexity-Boosting Reinforcement Learning (CBRL), which gradually increases the complexity of synthesized problems across training stages. Built on InternThinker-32B, InternGeometry solves 44 of 50 IMO geometry problems (2000-2024), exceeding the average gold medalist score (40.9), using only 13K training examples, just 0.004% of the data used by AlphaGeometry 2, demonstrating the potential of LLM agents on expert-level geometry tasks. InternGeometry can also propose novel auxiliary constructions for IMO problems that do not appear in human solutions. We will release the model, data, and symbolic engine to support future research.
- BEAVER: An Efficient Deterministic LLM Verifier
As large language models (LLMs) transition from research prototypes to production systems, practitioners often need reliable methods to verify that model outputs satisfy required constraints. While sampling-based estimates provide an intuition of model behavior, they offer no sound guarantees. We present BEAVER, the first practical framework for computing deterministic, sound probability bounds on LLM constraint satisfaction. Given any prefix-closed semantic constraint, BEAVER systematically explores the generation space using novel token trie and frontier data structures, maintaining provably sound bounds at every iteration. We formalize the verification problem, prove soundness of our approach, and evaluate BEAVER on correctness verification, privacy verification and secure code generation tasks across multiple state of the art LLMs. BEAVER achieves 6 to 8 times tighter probability bounds and identifies 3 to 4 times more high risk instances compared to baseline methods under identical computational budgets, enabling precise characterization and risk assessment that loose bounds or empirical evaluation cannot provide.
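The sound-bounds idea can be illustrated on a toy memoryless "model": exploring the generation tree under a prefix-closed constraint yields a bracket [satisfied, 1 − violated] on the true satisfaction probability at any budget. This is a sketch in the spirit of BEAVER, not its token-trie implementation:

```python
# Breadth-first exploration of the generation tree. Because the
# constraint is prefix-closed, a violating prefix condemns all of
# its extensions, so its whole probability mass moves to the
# "violated" side; complete satisfying sequences move to the
# "satisfied" side. At every step, [satisfied, 1 - violated] is a
# sound bracket on the true satisfaction probability.

P = {"a": 0.6, "b": 0.4}   # per-token distribution (memoryless toy model)
LENGTH = 3                  # sequences of exactly 3 tokens

def violates(prefix):
    # prefix-closed constraint: no two consecutive "b"
    return any(x == y == "b" for x, y in zip(prefix, prefix[1:]))

def bounds(max_nodes):
    satisfied = violated = 0.0
    frontier = [((), 1.0)]
    nodes = 0
    while frontier and nodes < max_nodes:
        prefix, mass = frontier.pop(0)
        nodes += 1
        if violates(prefix):
            violated += mass          # all extensions violate too
        elif len(prefix) == LENGTH:
            satisfied += mass         # complete and satisfying
        else:
            for tok, p in P.items():
                frontier.append((prefix + (tok,), mass * p))
    return satisfied, 1.0 - violated  # sound lower / upper bounds

lo, hi = bounds(max_nodes=100)        # budget large enough to finish
print(round(lo, 4), round(hi, 4))     # → 0.744 0.744
```

With a smaller budget (e.g. `bounds(7)`) the bracket is looser but still sound, which is the kind of anytime guarantee that sampling-based estimates cannot give.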
- MoCapAnything: Unified 3D Motion Capture for Arbitrary Skeletons from Monocular Videos
Motion capture now underpins content creation far beyond digital humans, yet most existing pipelines remain species- or template-specific. We formalize this gap as Category-Agnostic Motion Capture (CAMoCap): given a monocular video and an arbitrary rigged 3D asset as a prompt, the goal is to reconstruct a rotation-based animation such as BVH that directly drives the specific asset. We present MoCapAnything, a reference-guided, factorized framework that first predicts 3D joint trajectories and then recovers asset-specific rotations via constraint-aware inverse kinematics. The system contains three learnable modules and a lightweight IK stage: (1) a Reference Prompt Encoder that extracts per-joint queries from the asset's skeleton, mesh, and rendered images; (2) a Video Feature Extractor that computes dense visual descriptors and reconstructs a coarse 4D deforming mesh to bridge the gap between video and joint space; and (3) a Unified Motion Decoder that fuses these cues to produce temporally coherent trajectories. We also curate Truebones Zoo with 1038 motion clips, each providing a standardized skeleton-mesh-render triad. Experiments on both in-domain benchmarks and in-the-wild videos show that MoCapAnything delivers high-quality skeletal animations and exhibits meaningful cross-species retargeting across heterogeneous rigs, enabling scalable, prompt-driven 3D motion capture for arbitrary assets. Project page: https://animotionlab.github.io/MoCapAnything/
- Thinking with Images via Self-Calling Agent
Thinking-with-images paradigms have showcased remarkable visual reasoning capability by integrating visual information as dynamic elements into the Chain-of-Thought (CoT). However, optimizing interleaved multimodal CoT (iMCoT) through reinforcement learning remains challenging, as it relies on scarce high-quality reasoning data. In this study, we propose Self-Calling Chain-of-Thought (sCoT), a novel visual reasoning paradigm that reformulates iMCoT as a language-only CoT with self-calling. Specifically, a main agent decomposes the complex visual reasoning task into atomic subtasks and invokes its virtual replicas, i.e., parameter-sharing subagents, to solve them in isolated contexts. sCoT enjoys substantial training effectiveness and efficiency, as it requires no explicit interleaving between modalities, and employs group-relative policy optimization to reinforce effective reasoning behavior. Experiments on HR-Bench 4K show that sCoT improves overall reasoning performance by up to 1.9% with ~75% fewer GPU hours compared to strong baseline approaches. Code is available at https://github.com/YWenxi/think-with-images-through-self-calling.
- From Macro to Micro: Benchmarking Microscopic Spatial Intelligence on Molecules via Vision-Language Models
This paper introduces the concept of Microscopic Spatial Intelligence (MiSI), the capability to perceive and reason about the spatial relationships of invisible microscopic entities, which is fundamental to scientific discovery. To assess the potential of Vision-Language Models (VLMs) in this domain, we propose a systematic benchmark framework MiSI-Bench. This framework features over 163,000 question-answer pairs and 587,000 images derived from approximately 4,000 molecular structures, covering nine complementary tasks that evaluate abilities ranging from elementary spatial transformations to complex relational identifications. Experimental results reveal that current state-of-the-art VLMs perform significantly below human level on this benchmark. However, a fine-tuned 7B model demonstrates substantial potential, even surpassing humans in spatial transformation tasks, while its poor performance in scientifically-grounded tasks like hydrogen bond recognition underscores the necessity of integrating explicit domain knowledge for progress toward scientific AGI. The datasets are available at https://huggingface.co/datasets/zongzhao/MiSI-bench.
- Stronger Normalization-Free Transformers
Although normalization layers have long been viewed as indispensable components of deep learning architectures, the recent introduction of Dynamic Tanh (DyT) has demonstrated that alternatives are possible. The point-wise function DyT constrains extreme values for stable convergence and reaches normalization-level performance; this work seeks further for function designs that can surpass it. We first study how the intrinsic properties of point-wise functions influence training and performance. Building on these findings, we conduct a large-scale search for a more effective function design. Through this exploration, we introduce Derf(x) = erf(αx + s), where erf(x) is the rescaled Gaussian cumulative distribution function, and identify it as the most performant design. Derf outperforms LayerNorm, RMSNorm, and DyT across a wide range of domains, including vision (image recognition and generation), speech representation, and DNA sequence modeling. Our findings suggest that the performance gains of Derf largely stem from its improved generalization rather than stronger fitting capacity. Its simplicity and stronger performance make Derf a practical choice for normalization-free Transformer architectures.
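The proposed unit is simple enough to state directly: Derf(x) = erf(αx + s), applied point-wise in place of a normalization layer. A minimal sketch (α and s are learnable scalars in the paper; the defaults below are arbitrary):

```python
import math

# Derf(x) = erf(alpha * x + s), applied point-wise; alpha and s are
# learnable in the paper (the values here are arbitrary defaults).

def derf(x, alpha=1.0, s=0.0):
    return math.erf(alpha * x + s)

# Like tanh in DyT, erf saturates, constraining extreme activations:
print(round(derf(0.5), 4), derf(10.0), derf(-10.0))  # → 0.5205 1.0 -1.0
```

Since erf is the rescaled Gaussian CDF, Derf squashes extreme values into (−1, 1) just as DyT's tanh does, which is the stabilizing property the abstract credits for normalization-free training.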
- VQRAE: Representation Quantization Autoencoders for Multimodal Understanding, Generation and Reconstruction
Unifying multimodal understanding, generation, and reconstruction representations in a single tokenizer remains a key challenge in building unified models. Previous research predominantly attempts to address this with a dual-encoder paradigm, e.g., using separate encoders for understanding and generation, or balancing semantic representations and low-level features with a contrastive loss. In this paper, we propose VQRAE, a Vector Quantization version of Representation AutoEncoders, which pioneers the exploration of a unified representation that produces continuous semantic features for image understanding and discrete tokens for visual generation within a single tokenizer. Specifically, we build upon pretrained vision foundation models with a symmetric ViT decoder and adopt a two-stage training strategy: first, we freeze the encoder and learn a high-dimensional semantic VQ codebook with a pixel reconstruction objective; then we jointly optimize the encoder with self-distillation constraints. This design incurs negligible loss of semantic information, preserving multimodal understanding ability while yielding discrete tokens compatible with generation and fine-grained reconstruction. Besides, we identify an intriguing property of quantizing semantic encoders: they rely on a high-dimensional codebook, in contrast to the common practice of low-dimensional codebooks in image reconstruction. The semantic VQ codebook achieves a 100% utilization ratio at a dimension of 1536. VQRAE presents competitive performance on several benchmarks of visual understanding, generation, and reconstruction, with promising scaling properties in the autoregressive paradigm thanks to its discrete tokens.
- StereoSpace: Depth-Free Synthesis of Stereo Geometry via End-to-End Diffusion in a Canonical Space
We introduce StereoSpace, a diffusion-based framework for monocular-to-stereo synthesis that models geometry purely through viewpoint conditioning, without explicit depth or warping. A canonical rectified space and the conditioning guide the generator to infer correspondences and fill disocclusions end-to-end. To ensure fair and leakage-free evaluation, we introduce an end-to-end protocol that excludes any ground truth or proxy geometry estimates at test time. The protocol emphasizes metrics reflecting downstream relevance: iSQoE for perceptual comfort and MEt3R for geometric consistency. StereoSpace surpasses other methods from the warp & inpaint, latent-warping, and warped-conditioning categories, achieving sharp parallax and strong robustness on layered and non-Lambertian scenes. This establishes viewpoint-conditioned diffusion as a scalable, depth-free solution for stereo generation.
- Evaluating Gemini Robotics Policies in a Veo World Simulator
Generative world models hold significant potential for simulating interactions with visuomotor policies in varied environments. Frontier video models can enable generation of realistic observations and environment interactions in a scalable and general manner. However, the use of video models in robotics has been limited primarily to in-distribution evaluations, i.e., scenarios that are similar to ones used to train the policy or fine-tune the base video model. In this report, we demonstrate that video models can be used for the entire spectrum of policy evaluation use cases in robotics: from assessing nominal performance to out-of-distribution (OOD) generalization, and probing physical and semantic safety. We introduce a generative evaluation system built upon a frontier video foundation model (Veo). The system is optimized to support robot action conditioning and multi-view consistency, while integrating generative image-editing and multi-view completion to synthesize realistic variations of real-world scenes along multiple axes of generalization. We demonstrate that the system preserves the base capabilities of the video model to enable accurate simulation of scenes that have been edited to include novel interaction objects, novel visual backgrounds, and novel distractor objects. This fidelity enables accurately predicting the relative performance of different policies in both nominal and OOD conditions, determining the relative impact of different axes of generalization on policy performance, and performing red teaming of policies to expose behaviors that violate physical or semantic safety constraints. We validate these capabilities through 1600+ real-world evaluations of eight Gemini Robotics policy checkpoints and five tasks for a bimanual manipulator.
- Omni-Attribute: Open-vocabulary Attribute Encoder for Visual Concept Personalization
Visual concept personalization aims to transfer only specific image attributes, such as identity, expression, lighting, and style, into unseen contexts. However, existing methods rely on holistic embeddings from general-purpose image encoders, which entangle multiple visual factors and make it difficult to isolate a single attribute. This often leads to information leakage and incoherent synthesis. To address this limitation, we introduce Omni-Attribute, the first open-vocabulary image attribute encoder designed to learn high-fidelity, attribute-specific representations. Our approach jointly designs the data and model: (i) we curate semantically linked image pairs annotated with positive and negative attributes to explicitly teach the encoder what to preserve or suppress; and (ii) we adopt a dual-objective training paradigm that balances generative fidelity with contrastive disentanglement. The resulting embeddings prove effective for open-vocabulary attribute retrieval, personalization, and compositional generation, achieving state-of-the-art performance across multiple benchmarks.
- X-Humanoid: Robotize Human Videos to Generate Humanoid Videos at Scale
The advancement of embodied AI has unlocked significant potential for intelligent humanoid robots. However, progress in both Vision-Language-Action (VLA) models and world models is severely hampered by the scarcity of large-scale, diverse training data. A promising solution is to "robotize" web-scale human videos, which has been proven effective for policy training. However, these solutions mainly "overlay" robot arms to egocentric videos, which cannot handle complex full-body motions and scene occlusions in third-person videos, making them unsuitable for robotizing humans. To bridge this gap, we introduce X-Humanoid, a generative video editing approach that adapts the powerful Wan 2.2 model into a video-to-video structure and finetunes it for the human-to-humanoid translation task. This finetuning requires paired human-humanoid videos, so we designed a scalable data creation pipeline, turning community assets into 17+ hours of paired synthetic videos using Unreal Engine. We then apply our trained model to 60 hours of the Ego-Exo4D videos, generating and releasing a new large-scale dataset of over 3.6 million "robotized" humanoid video frames. Quantitative analysis and user studies confirm our method's superiority over existing baselines: 69% of users rated it best for motion consistency, and 62.1% for embodiment correctness.
Solidot(15)
- Astronomers image a Tatooine-like exoplanet orbiting two stars
Astronomers have directly imaged an exoplanet that, like Tatooine in Star Wars, orbits a binary star. Directly imaging any exoplanet is rare; imaging one that circles two stars at once is rarer still. Remarkably, the planet, HD 143811 AB b, lies about 64 AU from its host pair, making it the closest-in planet yet found by direct imaging around a binary system, with an orbital radius roughly six times smaller than previous planets of this type. HD 143811 AB b had in fact been hiding in observation data taken years earlier. It is about 6 times the mass of Jupiter, roughly 13 million years old, and hotter than any planet in the Solar System. The architecture of the HD 143811 system is equally striking: the two stars orbit each other tightly (0.18 AU), completing a revolution every 18 days, while the planet circles the pair with a semi-major axis of about 64 AU and a period of 330 years, a timescale comparable to Pluto's orbit around the Sun.
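The quoted 330-year period is consistent with Kepler's third law, P = sqrt(a³/M), with P in years, a in AU, and M in solar masses. Note the binary's combined mass used below (~2.4 solar masses) is an assumption for illustration; it is not stated in the summary:

```python
import math

# Kepler's third law: P [years] = sqrt(a**3 [AU] / M [solar masses]).
# The combined binary mass of ~2.4 Msun is an assumed figure, chosen
# here only to show the reported period is plausible.

def orbital_period_years(a_au, m_solar):
    return math.sqrt(a_au ** 3 / m_solar)

print(round(orbital_period_years(64, 2.4)))  # → 330
```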
- Global EV sales up 21% so far this year
Benchmark Mineral Intelligence reports that global electric vehicle sales reached 2 million units in November 2025 and 18.5 million year to date, up 21% over the same period in 2024. Europe posted the largest November gain, up 36% year over year, with battery-electric sales up 35% and plug-in hybrids up 39%; its year-to-date total of 3.8 million is 33% higher than in 2024. North American sales fell after the EV tax credit expired on September 30, leaving year-to-date sales down 1% from 2024. China still far outsells the rest of the world, with sales up 19% year to date to 11.6 million units; BYD set a record with 131,935 EV exports in November, its European sales this year have reached 200,000, its Southeast Asian sales have doubled, and its South American sales have grown more than 50%. Outside China, Europe, and North America, EV sales this year are up 48% over 2024 to 1.5 million units.
- Linux 6.19-rc1 released; Loongson adds 32-bit LoongArch32 support to the kernel
Linus Torvalds usually releases new kernel RC versions on Sunday, which, since he lives in North America, normally means Monday Beijing time. This week, however, Torvalds was in Japan for the Linux Plumbers Conference and the Linux Kernel Maintainers Summit, and since Sunday in Japan corresponds to Saturday in the US, he released Linux 6.19-rc1 on local Sunday. Torvalds noted this might catch out people who habitually submit pull requests at the last minute. Linux 6.19 brings driver, subsystem, and architecture updates, among them Loongson's addition of support for the 32-bit LoongArch32 architecture. While most CPU architectures have moved from 32-bit to 64-bit, Loongson is going the other way, deriving a 32-bit architecture from its 64-bit one.
- Long-term energy drink habit causes a man's stroke
BMJ Case Reports this week describes an unusual case: a man suffered a stroke linked to long-term energy drink consumption. The man, in his 50s, was hospitalized with sudden complete numbness on his left side and ataxia; his blood pressure was 254/150 mm Hg, against a normal reading of 120/80. He did not smoke, drink, or abuse any drugs, was otherwise healthy, and all routine test results were normal. But brain scans showed evidence of arterial spasm, which is strongly associated with hypertension, and an MRI revealed tissue death in the thalamus. After stroke rehabilitation he was discharged three days later with blood pressure down to 170/80 mm Hg, still high but no longer critical. Over the following three months of regular follow-ups his blood pressure climbed again, at one point landing him back in hospital for hypertension. When doctors probed his lifestyle, he disclosed that he drank an average of eight cans of high-potency energy drink a day. Each can is labeled as containing 160 mg of caffeine, a stimulant that raises blood pressure; a typical cup of coffee contains about 90 mg, so eight cans amount to 1,280 mg of caffeine, the equivalent of roughly 14 cups of coffee. The doctors noted that other ingredients in energy drinks may also act as stimulants, and earlier research has found that continuous energy drink consumption has a cumulative effect on blood pressure. On their advice he stopped drinking them; within a week his blood pressure returned to normal, and it has remained normal eight years on.
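The report's caffeine arithmetic checks out directly:

```python
# Checking the case report's caffeine totals.
CANS_PER_DAY = 8
MG_PER_CAN = 160
MG_PER_COFFEE = 90          # typical cup of coffee

total_mg = CANS_PER_DAY * MG_PER_CAN
print(total_mg)                         # → 1280
print(round(total_mg / MG_PER_COFFEE))  # → 14 cups of coffee
```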
- Chinese satellite nearly collides with a Starlink satellite
On December 10, CAS Space's Kinetica-1 (Lijian-1) Y11 rocket lifted off from the Dongfeng launch site and successfully placed 9 satellites into orbit: the UAE's 813 satellite, the Jilin-1 Gaofen-07B01, 07C01, and 07D01 satellites, Dongpo-15, Yuxing-2 09, Yixian-A, SPNEX, and Slippers2Sat. On Friday SpaceX disclosed that one of the satellites had come within about 200 meters of the Starlink satellite STARLINK-6079 (56120). Michael Nicolls, SpaceX's VP of Starlink engineering, complained that the launch had not been coordinated in advance with satellites already on orbit. CAS Space says it is investigating the incident. Largely because of SpaceX, orbits are growing crowded: fewer than 3,400 operational satellites were in orbit in 2020, versus more than 13,000 today, most of them SpaceX's Starlink broadband satellites, which number some 9,300, with over 3,000 launched by SpaceX this year alone.
- China leads in nearly 90% of critical technology fields
According to a report by the think tank Australian Strategic Policy Institute (ASPI), China leads research in nearly nine out of ten critical technology fields. ASPI assessed research in 74 current and emerging technology areas; China ranks first in 66 of them, including nuclear energy, synthetic biology, and small satellites, while the US ranks first in 8, including quantum computing and geoengineering. The results show a striking reversal of technological advantage: in the early 2000s the US led in 90% of these fields while China led in fewer than 5%. ASPI analyzed a database of more than 9 million publications, ranking countries by the affiliations of the authors of the top 10% most-cited papers over the past five years. Steven Hai, a political economist at Xi'an Jiaotong-Liverpool University in Suzhou, cautioned that the analysis should not be read as a collapse of American strength.
- US auto alliance urges government to block Chinese automakers from building US plants
An industry alliance of major automakers including General Motors, Ford, Toyota, Volkswagen, Hyundai, and Stellantis has urged the US government to block Chinese automakers and battery manufacturers from building plants in the US, calling China a "clear and present threat" to the American auto industry. The alliance called on members of Congress to preserve the ban on importing IT technology and services from China, a ban that effectively prohibits importing vehicles from Chinese manufacturers. The group said that no amount of domestic investment by US automakers and battery producers can offset the effect of China's subsidy-driven, long-term global oversupply, which could lead to dumping; Congress and the Trump administration, it argued, must prevent that from happening in the US market.
- Russian ransomware group stored its master key in plaintext
After months of silence, the pro-Russian hacking group CyberVolk has launched CyberVolk 2.x (aka VolkLocker), a Telegram-based ransomware-as-a-service. The Telegram-based service lowers the barrier to entry, but the good news is that the developers slipped up while testing: the master key is hardcoded in the executable, meaning victims can decrypt their files without paying the ransom. VolkLocker does not generate encryption keys dynamically; the hardcoded master key is written in plaintext to the %TEMP% folder. The ransomware was found to encrypt files with AES-256-GCM (Galois/Counter Mode).
- Hollywood director defrauded Netflix of $11 million, bet on crypto and bought luxury cars
Hollywood director Carl Rinsch is best known for directing commercials. His 2013 feature debut 47 Ronin, starring Keanu Reeves and Hiroyuki Sanada, cost $175 million but grossed only $150 million, one of the biggest money-losers of that year, after which Rinsch returned to commercials. He and his wife conceived a science fiction series about organic intelligence whose premise attracted streaming companies; Netflix acquired the rights and agreed to invest $61.2 million to produce the series, titled Conquest. Production did not go smoothly, and Netflix abandoned the show in 2021. Rinsch was charged with misappropriating $11 million, moving the funds into a personal brokerage account and losing more than half of it on securities within two months. He then speculated in Dogecoin, cashing out in May 2021 for a $23 million profit that markedly improved his personal finances. He went on to spend $2.4 million on five Rolls-Royces and a Ferrari, $3.3 million on furniture and antiques, and $387,000 on a Swiss watch. Netflix has written off $55 million in bad debt and recovered nothing. This week a jury in the Southern District of New York found the 48-year-old director guilty on seven counts; he faces up to 90 years in prison, with sentencing set for April 17, 2026.
- Scientists map 97% of the world's buildings in 3D
Scientists have produced a 3D map covering 97% of the world's buildings. The map, GlobalBuildingAtlas, is published on GitHub under an MIT license with a Commons Clause restriction (no commercial resale). The dataset covers 2.75 billion buildings, mapping each building's footprint and height at a spatial resolution of 3 m × 3 m, and can be used for disaster risk assessment, climate modeling, and urban planning. The researchers built the map with deep learning tools from roughly 800,000 satellite images taken in 2019. They found that Asia accounts for nearly half of all mapped buildings, about 1.22 billion, and also leads the world in total building volume at 1.27 trillion cubic meters, reflecting rapid urbanization and dense metropolitan areas in China, India, and Southeast Asia. Africa ranks second in building count at 540 million, but its total volume is only 117 billion cubic meters, dominated by small, low-rise structures. Finland's per-capita building volume is six times Greece's, while Niger's is 1/27 of the world average.
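The per-building averages implied by the figures above make the Asia/Africa contrast concrete:

```python
# Average volume per building implied by the reported totals.
asia_buildings = 1.22e9        # ~1.22 billion buildings
asia_volume_m3 = 1.27e12       # 1.27 trillion cubic meters
africa_buildings = 5.4e8       # 540 million buildings
africa_volume_m3 = 1.17e11     # 117 billion cubic meters

print(round(asia_volume_m3 / asia_buildings))      # → 1041 m3 per building
print(round(africa_volume_m3 / africa_buildings))  # → 217 m3 per building
```

Asia's average building is nearly five times the volume of Africa's, matching the small, low-rise characterization in the summary.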
- Reddit says Australia's child social media ban infringes freedom
Reddit filed a lawsuit on Friday against Australia's law barring those under 16 from social media. The law, the first of its kind in the world, took effect on Wednesday, and US-based Reddit is on the banned list. Reddit has asked Australia's High Court to review the case, arguing that as a discussion forum it should not be on the list of banned social media platforms. In its court filing, Reddit challenges the law's validity, saying it "infringes the freedom of political expression." Reddit says it agrees that those under 16 should be protected, but that the law's heavy-handed approach, combined with potentially insecure verification processes, would strip older teenagers and young adults of the ability to engage with their peers, including in political discussion. The statement adds: "Unlike other platforms covered by the law, the overwhelming majority of Reddit's users are adults, and we do not market or advertise to children under 18. In short, users under 16 are not Reddit's primary market, nor do we intend to make them one."
- Anacondas were already giants ten million years ago
A research team led by the University of Cambridge reconstructed ancient anacondas from 12.4-million-year-old fossils found in Venezuela and found the tropical snakes reached 5.2 meters in length. During the middle-to-late Miocene, 12.4 to 5.3 million years ago, warmer global temperatures, vast wetlands, and abundant food let many animals grow far larger than their modern relatives. Miocene giants such as the 12-meter caiman Purussaurus and the 3.2-meter giant freshwater turtle Stupendemys have gone extinct, but the anaconda survives as a giant to this day. The team measured 183 fossil anaconda vertebrae from at least 32 snakes; the results show the ancient anacondas were about 4 to 5 meters long, much like modern ones, which the researchers see as evidence of the snake's remarkable adaptability.
- Switzerland considers capping its population at 10 million
Rising support for far-right parties is pressuring governments across Europe to tighten immigration controls, and Switzerland is about to vote on a proposal that takes this to a new level: a population cap. If the number of Swiss residents rises from today's roughly 9 million to above 10 million, the proposal could trigger a blanket ban on new immigration, applying equally to refugees, skilled workers, and executives with six-figure salaries. Under Switzerland's referendum system, citizens are expected to vote on the proposal next year, and polls suggest it is likely to pass. Restricting the movement of people rarely benefits an economy; a blanket ban on new immigrants is expected to leave Switzerland short of key skills and damage its competitiveness, so the vote will show what trade-offs citizens are willing to make in the name of preserving the country's attractiveness. The right-wing Swiss People's Party, which won 28% of the vote in the last election on a platform casting Swiss citizenship as a privilege rather than a right, floated the population cap in 2023, framing it as a way to protect the Swiss way of life and shield the environment from excessive human activity.
- 2024 Free Software Awards announced
The Free Software Foundation (FSF) announced the winners of the 2024 Free Software Awards at the end of 2025. The Award for Projects of Social Benefit went to Govdirectory, a crowdsourced service that fact-checks government addresses, phone numbers, websites, and social media accounts; past winners include OpenStreetMap, Public Lab, and Let's Encrypt. The Award for Outstanding New Free Software Contributor went to GIMP contributor Alx Sa, and the Award for the Advancement of Free Software went to veteran developer Andy Wingo, one of the maintainers of the GNU Guile project.
- Disney partners with OpenAI
Disney has reversed its stance on AI companies' use of its copyrighted characters, announcing a partnership with OpenAI under which it will invest $1 billion in OpenAI and receive rights to additional warrants. As part of the deal, Disney will let OpenAI use more than 200 of its copyrighted characters, drawn from Disney, Marvel, Star Wars, and Pixar, to generate short videos and images. The new features are expected to roll out in 2026 through OpenAI's video generation platform Sora and ChatGPT, and some user-created short videos will also appear on Disney+. The agreement does not include rights to any character likenesses or voices. Disney employees will also be able to use OpenAI tools to build new products.