OrangeBot.AI Digest — 2025-10-20
57 headlines across 4 sources, aggregated for this day.
Hacker News (15)
- Claude Code on the web (www.anthropic.com)
- Dutch spy services have restricted intelligence-sharing with the United States (intelnews.org)
- AWS outage shows internet users 'at mercy' of too few providers, experts say (www.theguardian.com)
- Chess grandmaster Daniel Naroditsky has died (old.reddit.com)
- Production RAG: what I learned from processing 5M+ documents (blog.abdellatif.io)
- BERT is just a single text diffusion step (nathan.rs)
- Servo v0.0.1 (github.com)
- Alibaba Cloud says it cut Nvidia AI GPU use by 82% with new pooling system (www.tomshardware.com)
- AWS Outage: A Single Cloud Region Shouldn't Take Down the World. But It Did (faun.dev)
- Major AWS outage takes down Fortnite, Alexa, Snapchat, and more (www.theverge.com)
- AWS Multiple Services Down in us-east-1 (health.aws.amazon.com)
- Docker Systems Status: Full Service Disruption (www.dockerstatus.com)
- Major AWS Outage Happening (old.reddit.com)
- Pointer Pointer (2012) (pointerpointer.com)
- DeepSeek OCR (github.com)
GitHub Trending (15)
- anthropics / claude-cookbooks
A collection of notebooks/recipes showcasing some fun and effective ways of using Claude.
- SagerNet / sing-box
The universal proxy platform
- DrewThomasson / ebook2audiobook
Generate audiobooks from e-books, voice cloning & 1107+ languages!
- x1xhlol / system-prompts-and-models-of-ai-tools
FULL Augment Code, Claude Code, Cluely, CodeBuddy, Comet, Cursor, Devin AI, Junie, Kiro, Leap.new, Lovable, Manus Agent Tools, NotionAI, Orchids.app, Perplexity, Poke, Qoder, Replit, Same.dev, Trae, Traycer AI, VSCode Agent, Warp.dev, Windsurf, Xcode, Z.ai Code, dia & v0. (And other Open Sourced) System Prompts, Internal Tools & AI Models
- huggingface / lerobot
🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning
- wavetermdev / waveterm
An open-source, cross-platform terminal for seamless workflows
- karpathy / micrograd
A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API
- huggingface / chat-ui
Open source codebase powering the HuggingChat app
- TheAlgorithms / Python
All Algorithms implemented in Python
- lfnovo / open-notebook
An Open Source implementation of Notebook LM with more flexibility and features
- Anuken / Mindustry
The automation tower defense RTS
- Skyvern-AI / skyvern
Automate browser-based workflows with LLMs and Computer Vision
- PaddlePaddle / PaddleOCR
Turn any PDF or image document into structured data for your AI. A powerful, lightweight OCR toolkit that bridges the gap between images/PDFs and LLMs. Supports 100+ languages.
- clockworklabs / SpacetimeDB
Multiplayer at the speed of light
- basecamp / omarchy
Opinionated Arch/Hyprland Setup
Hugging Face (15)
- A Theoretical Study on Bridging Internal Probability and Self-Consistency for LLM Reasoning
Test-time scaling seeks to improve the reasoning performance of large language models (LLMs) by adding computational resources. A prevalent approach within the field is sampling-based test-time scaling methods, which enhance reasoning by generating multiple reasoning paths for a given input during inference. However, despite its practical success, the theoretical foundations remain underexplored. In this paper, we provide the first theoretical framework for analyzing sampling-based test-time scaling methods, grounded in the perspective of confidence estimation. Based on the framework, we analyze two dominant paradigms: self-consistency and perplexity, and reveal key limitations: self-consistency suffers from high estimation error while perplexity exhibits substantial modeling error and possible degradation of the estimation error convergence. To address these limitations, we introduce RPC, a hybrid method that leverages our theoretical insights through two key components: Perplexity Consistency and Reasoning Pruning. Perplexity Consistency combines the strengths of self-consistency and perplexity, boosting the convergence rate of estimation error from linear to exponential while preserving model error. Reasoning Pruning prevents degradation by eliminating low-probability reasoning paths. Both theoretical analysis and empirical results across seven benchmark datasets demonstrate that RPC has a strong potential for reducing reasoning error. Notably, RPC achieves reasoning performance comparable to self-consistency while not only enhancing confidence reliability but also reducing sampling costs by 50%. The code and resources are available at https://wnjxyk.github.io/RPC.
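The self-consistency baseline the paper analyzes can be sketched in a few lines: sample several reasoning paths, majority-vote on the final answers, and treat the vote share as the confidence estimate. A minimal sketch, where `sample_fn` is a hypothetical stand-in for an LLM call that returns a final answer string:

```python
from collections import Counter

def self_consistency(sample_fn, prompt, n=16):
    """Sample n reasoning paths and majority-vote on the final answers.
    The vote share is the empirical confidence estimate whose error the
    paper analyzes (function and parameter names are illustrative)."""
    answers = [sample_fn(prompt) for _ in range(n)]
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes / n  # chosen answer and its vote share
```

RPC, per the abstract, goes further by folding the model's own token probabilities into the estimate (Perplexity Consistency) and discarding low-probability paths (Reasoning Pruning).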
- OmniVinci: Enhancing Architecture and Data for Omni-Modal Understanding LLM
Advancing machine intelligence requires developing the ability to perceive across multiple modalities, much as humans sense the world. We introduce OmniVinci, an initiative to build a strong, open-source, omni-modal LLM. We carefully study the design choices across model architecture and data curation. For model architecture, we present three key innovations: (i) OmniAlignNet for strengthening alignment between vision and audio embeddings in a shared omni-modal latent space; (ii) Temporal Embedding Grouping for capturing relative temporal alignment between vision and audio signals; and (iii) Constrained Rotary Time Embedding for encoding absolute temporal information in omni-modal embeddings. We introduce a curation and synthesis pipeline that generates 24M single-modal and omni-modal conversations. We find that modalities reinforce one another in both perception and reasoning. Our model, OmniVinci, outperforms Qwen2.5-Omni with +19.05 on DailyOmni (cross-modal understanding), +1.7 on MMAR (audio), and +3.9 on Video-MME (vision), while using just 0.2T training tokens - a 6 times reduction compared to Qwen2.5-Omni's 1.2T. We finally demonstrate omni-modal advantages in downstream applications spanning robotics, medical AI, and smart factory.
- NANO3D: A Training-Free Approach for Efficient 3D Editing Without Masks
3D object editing is essential for interactive content creation in gaming, animation, and robotics, yet current approaches remain inefficient, inconsistent, and often fail to preserve unedited regions. Most methods rely on editing multi-view renderings followed by reconstruction, which introduces artifacts and limits practicality. To address these challenges, we propose Nano3D, a training-free framework for precise and coherent 3D object editing without masks. Nano3D integrates FlowEdit into TRELLIS to perform localized edits guided by front-view renderings, and further introduces region-aware merging strategies, Voxel/Slat-Merge, which adaptively preserve structural fidelity by ensuring consistency between edited and unedited areas. Experiments demonstrate that Nano3D achieves superior 3D consistency and visual quality compared with existing methods. Based on this framework, we construct the first large-scale 3D editing dataset, Nano3D-Edit-100k, which contains over 100,000 high-quality 3D editing pairs. This work addresses long-standing challenges in both algorithm design and data availability, significantly improving the generality and reliability of 3D editing, and laying the groundwork for the development of feed-forward 3D editing models. Project Page: https://jamesyjl.github.io/Nano3D
- Emergent Misalignment via In-Context Learning: Narrow in-context examples can produce broadly misaligned LLMs
Recent work has shown that narrow finetuning can produce broadly misaligned LLMs, a phenomenon termed emergent misalignment (EM). While concerning, these findings were limited to finetuning and activation steering, leaving out in-context learning (ICL). We therefore ask: does EM emerge in ICL? We find that it does: across three datasets, three frontier models produce broadly misaligned responses at rates between 2% and 17% given 64 narrow in-context examples, and up to 58% with 256 examples. We also examine mechanisms of EM by eliciting step-by-step reasoning (while leaving in-context examples unchanged). Manual analysis of the resulting chain-of-thought shows that 67.5% of misaligned traces explicitly rationalize harmful outputs by adopting a reckless or dangerous "persona", echoing prior results on finetuning-induced EM.
- Scaling Instruction-Based Video Editing with a High-Quality Synthetic Dataset
Instruction-based video editing promises to democratize content creation, yet its progress is severely hampered by the scarcity of large-scale, high-quality training data. We introduce Ditto, a holistic framework designed to tackle this fundamental challenge. At its heart, Ditto features a novel data generation pipeline that fuses the creative diversity of a leading image editor with an in-context video generator, overcoming the limited scope of existing models. To make this process viable, our framework resolves the prohibitive cost-quality trade-off by employing an efficient, distilled model architecture augmented by a temporal enhancer, which simultaneously reduces computational overhead and improves temporal coherence. Finally, to achieve full scalability, this entire pipeline is driven by an intelligent agent that crafts diverse instructions and rigorously filters the output, ensuring quality control at scale. Using this framework, we invested over 12,000 GPU-days to build Ditto-1M, a new dataset of one million high-fidelity video editing examples. We trained our model, Editto, on Ditto-1M with a curriculum learning strategy. The results demonstrate superior instruction-following ability and establish a new state-of-the-art in instruction-based video editing.
- Latent Diffusion Model without Variational Autoencoder
Recent progress in diffusion-based visual generation has largely relied on latent diffusion models with variational autoencoders (VAEs). While effective for high-fidelity synthesis, this VAE+diffusion paradigm suffers from limited training efficiency, slow inference, and poor transferability to broader vision tasks. These issues stem from a key limitation of VAE latent spaces: the lack of clear semantic separation and strong discriminative structure. Our analysis confirms that these properties are crucial not only for perception and understanding tasks, but also for the stable and efficient training of latent diffusion models. Motivated by this insight, we introduce SVG, a novel latent diffusion model without variational autoencoders, which leverages self-supervised representations for visual generation. SVG constructs a feature space with clear semantic discriminability by leveraging frozen DINO features, while a lightweight residual branch captures fine-grained details for high-fidelity reconstruction. Diffusion models are trained directly on this semantically structured latent space to facilitate more efficient learning. As a result, SVG enables accelerated diffusion training, supports few-step sampling, and improves generative quality. Experimental results further show that SVG preserves the semantic and discriminative capabilities of the underlying self-supervised representations, providing a principled pathway toward task-general, high-quality visual representations.
- Skyfall-GS: Synthesizing Immersive 3D Urban Scenes from Satellite Imagery
Synthesizing large-scale, explorable, and geometrically accurate 3D urban scenes is a challenging yet valuable task in providing immersive and embodied applications. The challenges lie in the lack of large-scale and high-quality real-world 3D scans for training generalizable generative models. In this paper, we take an alternative route to create large-scale 3D scenes by synergizing the readily available satellite imagery that supplies realistic coarse geometry and the open-domain diffusion model for creating high-quality close-up appearances. We propose Skyfall-GS, the first city-block scale 3D scene creation framework without costly 3D annotations, also featuring real-time, immersive 3D exploration. We tailor a curriculum-driven iterative refinement strategy to progressively enhance geometric completeness and photorealistic textures. Extensive experiments demonstrate that Skyfall-GS provides improved cross-view consistent geometry and more realistic textures compared to state-of-the-art approaches. Project page: https://skyfall-gs.jayinnn.dev/
- Paper2Web: Let's Make Your Paper Alive!
Academic project websites can more effectively disseminate research when they clearly present core content and enable intuitive navigation and interaction. However, current approaches such as direct Large Language Model (LLM) generation, templates, or direct HTML conversion struggle to produce layout-aware, interactive sites, and a comprehensive evaluation suite for this task has been lacking. In this paper, we introduce Paper2Web, a benchmark dataset and multi-dimensional evaluation framework for assessing academic webpage generation. It incorporates rule-based metrics like Connectivity, Completeness and human-verified LLM-as-a-Judge (covering interactivity, aesthetics, and informativeness), and PaperQuiz, which measures paper-level knowledge retention. We further present PWAgent, an autonomous pipeline that converts scientific papers into interactive and multimedia-rich academic homepages. The agent iteratively refines both content and layout through MCP tools that enhance emphasis, balance, and presentation quality. Our experiments show that PWAgent consistently outperforms end-to-end baselines like template-based webpages and arXiv/alphaXiv versions by a large margin while maintaining low cost, achieving the Pareto-front in academic webpage generation.
- LightsOut: Diffusion-based Outpainting for Enhanced Lens Flare Removal
Lens flare significantly degrades image quality, impacting critical computer vision tasks like object detection and autonomous driving. Recent Single Image Flare Removal (SIFR) methods perform poorly when off-frame light sources are incomplete or absent. We propose LightsOut, a diffusion-based outpainting framework tailored to enhance SIFR by reconstructing off-frame light sources. Our method leverages a multitask regression module and LoRA fine-tuned diffusion model to ensure realistic and physically consistent outpainting results. Comprehensive experiments demonstrate LightsOut consistently boosts the performance of existing SIFR methods across challenging scenarios without additional retraining, serving as a universally applicable plug-and-play preprocessing solution. Project page: https://ray-1026.github.io/lightsout/
- A^2FM: An Adaptive Agent Foundation Model for Tool-Aware Hybrid Reasoning
Large language models split into two families: reasoning-centric LLMs, which strengthen internal chain-of-thought reasoning but cannot invoke external tools, and agentic LLMs, which learn to interact with environments and leverage tools but often lag in deep reasoning. This divide arises from fundamentally different training objectives, leading to mismatched strengths and inefficiency on simple queries, where both families tend to overthink or over-call tools. In this work, we present Adaptive Agent Foundation Model (A^2FM), a unified framework that follows a route-then-align principle: the model first learns task-aware routing and then aligns mode-specific trajectories under a shared backbone. To address the inefficiency gap, we introduce a third mode, instant, that handles simple queries directly, preventing unnecessary reasoning or tool calls while complementing the agentic and reasoning modes. To jointly enhance accuracy and efficiency, we propose Adaptive Policy Optimization (APO), which enforces adaptive sampling across modes and applies a cost-regularized reward. On the 32B scale, A^2FM achieves 13.4% on BrowseComp, 70.4% on AIME25, and 16.7% on HLE, setting new SOTA among comparable models and performing competitively with frontier LLMs across agentic, reasoning, and general benchmarks. Notably, the adaptive execution achieves a cost of pass of only $0.00487 per correct answer, cutting cost by 45.2% relative to reasoning and 33.5% relative to agentic, thus delivering substantially higher cost efficiency while maintaining comparable accuracy.
- MorphoBench: A Benchmark with Difficulty Adaptive to Model Reasoning
With the advancement of powerful large-scale reasoning models, effectively evaluating the reasoning capabilities of these models has become increasingly important. However, existing benchmarks designed to assess the reasoning abilities of large models tend to be limited in scope and lack the flexibility to adapt their difficulty according to the evolving reasoning capacities of the models. To address this, we propose MorphoBench, a benchmark that incorporates multidisciplinary questions to evaluate the reasoning capabilities of large models and can adjust and update question difficulty based on the reasoning abilities of advanced models. Specifically, we curate the benchmark by selecting and collecting complex reasoning questions from existing benchmarks and sources such as Olympiad-level competitions. Additionally, MorphoBench adaptively modifies the analytical challenge of questions by leveraging key statements generated during the model's reasoning process. Furthermore, it includes questions generated using simulation software, enabling dynamic adjustment of benchmark difficulty with minimal resource consumption. We have gathered over 1,300 test questions and iteratively adjusted the difficulty of MorphoBench based on the reasoning capabilities of models such as o3 and GPT-5. MorphoBench enhances the comprehensiveness and validity of model reasoning evaluation, providing reliable guidance for improving both the reasoning abilities and scientific robustness of large models. The code has been released in https://github.com/OpenDCAI/MorphoBench.
- Language Models Model Language
Linguistic commentary on LLMs, heavily influenced by the theoretical frameworks of de Saussure and Chomsky, is often speculative and unproductive. Critics challenge whether LLMs can legitimately model language, citing the need for "deep structure" or "grounding" to achieve an idealized linguistic "competence." We argue for a radical shift in perspective towards the empiricist principles of Witold Mańczak, a prominent general and historical linguist. He defines language not as a "system of signs" or a "computational system of the brain" but as the totality of all that is said and written. Above all, he identifies frequency of use of particular language elements as language's primary governing principle. Using his framework, we challenge prior critiques of LLMs and provide a constructive guide for designing, evaluating, and interpreting language models.
- BLIP3o-NEXT: Next Frontier of Native Image Generation
We present BLIP3o-NEXT, a fully open-source foundation model in the BLIP3 series that advances the next frontier of native image generation. BLIP3o-NEXT unifies text-to-image generation and image editing within a single architecture, demonstrating strong image generation and image editing capabilities. In developing the state-of-the-art native image generation model, we identify four key insights: (1) Most architectural choices yield comparable performance; an architecture can be deemed effective provided it scales efficiently and supports fast inference; (2) The successful application of reinforcement learning can further push the frontier of native image generation; (3) Image editing still remains a challenging task, yet instruction following and the consistency between generated and reference images can be significantly enhanced through post-training and data engine; (4) Data quality and scale continue to be decisive factors that determine the upper bound of model performance. Building upon these insights, BLIP3o-NEXT leverages an Autoregressive + Diffusion architecture in which an autoregressive model first generates discrete image tokens conditioned on multimodal inputs, whose hidden states are then used as conditioning signals for a diffusion model to generate high-fidelity images. This architecture integrates the reasoning strength and instruction following of autoregressive models with the fine-detail rendering ability of diffusion models, achieving a new level of coherence and realism. Extensive evaluations of various text-to-image and image-editing benchmarks show that BLIP3o-NEXT achieves superior performance over existing models.
- Foundation Models for Scientific Discovery: From Paradigm Enhancement to Paradigm Transition
Foundation models (FMs), such as GPT-4 and AlphaFold, are reshaping the landscape of scientific research. Beyond accelerating tasks such as hypothesis generation, experimental design, and result interpretation, they prompt a more fundamental question: Are FMs merely enhancing existing scientific methodologies, or are they redefining the way science is conducted? In this paper, we argue that FMs are catalyzing a transition toward a new scientific paradigm. We introduce a three-stage framework to describe this evolution: (1) Meta-Scientific Integration, where FMs enhance workflows within traditional paradigms; (2) Hybrid Human-AI Co-Creation, where FMs become active collaborators in problem formulation, reasoning, and discovery; and (3) Autonomous Scientific Discovery, where FMs operate as independent agents capable of generating new scientific knowledge with minimal human intervention. Through this lens, we review current applications and emerging capabilities of FMs across existing scientific paradigms. We further identify risks and future directions for FM-enabled scientific discovery. This position paper aims to support the scientific community in understanding the transformative role of FMs and to foster reflection on the future of scientific discovery. Our project is available at https://github.com/usail-hkust/Awesome-Foundation-Models-for-Scientific-Discovery.
- Build Your Personalized Research Group: A Multiagent Framework for Continual and Interactive Science Automation
The automation of scientific discovery represents a critical milestone in Artificial Intelligence (AI) research. However, existing agentic systems for science suffer from two fundamental limitations: rigid, pre-programmed workflows that cannot adapt to intermediate findings, and inadequate context management that hinders long-horizon research. We present freephdlabor, an open-source multiagent framework featuring fully dynamic workflows determined by real-time agent reasoning and a \textit{modular architecture} enabling seamless customization -- users can modify, add, or remove agents to address domain-specific requirements. The framework provides comprehensive infrastructure including automatic context compaction, workspace-based communication to prevent information degradation, memory persistence across sessions, and non-blocking human intervention mechanisms. These features collectively transform automated research from isolated, single-run attempts into continual research programs that build systematically on prior explorations and incorporate human feedback. By providing both the architectural principles and practical implementation for building customizable co-scientist systems, this work aims to facilitate broader adoption of automated research across scientific domains, enabling practitioners to deploy interactive multiagent systems that autonomously conduct end-to-end research -- from ideation through experimentation to publication-ready manuscripts.
Solidot (12)
- AWS outage hits Amazon, Fortnite, and many other services
Amazon AWS suffered a major outage affecting millions of websites and services, including Amazon itself, Prime Video, Perplexity AI, Canva, and games such as Fortnite. In a statement on its AWS status page, Amazon said it had confirmed significant error rates for requests to the DynamoDB endpoint in the US-EAST-1 region, and that engineers were working to mitigate the issue and fully understand its root cause.
- Xubuntu website compromised with cryptocurrency-stealing malware
The official website of Xubuntu, an Ubuntu Linux derivative that uses the Xfce desktop environment, was hacked. The attackers placed a zip file on the download page containing a suspicious exe and a tos.txt file whose copyright notice reads Copyright (c) 2026 Xubuntu.org. The embedded malware steals cryptocurrency by scanning the clipboard for cryptocurrency addresses and swapping in attacker-controlled wallet addresses; the coins scanned for include Bitcoin, Litecoin, Ethereum, and Dogecoin.
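The clipboard-swap mechanism described above hinges on recognizing coin addresses by their distinctive formats; the same pattern check can be used defensively, warning a user before they paste. A minimal sketch, where the regexes are loose illustrations rather than exhaustive validators:

```python
import re

# Loose patterns for common address formats (illustrative, not exhaustive):
PATTERNS = {
    "bitcoin":  re.compile(r"\b(bc1[a-z0-9]{25,62}|[13][a-km-zA-HJ-NP-Z1-9]{25,34})\b"),
    "ethereum": re.compile(r"\b0x[a-fA-F0-9]{40}\b"),
    "litecoin": re.compile(r"\b(ltc1[a-z0-9]{25,62}|[LM][a-km-zA-HJ-NP-Z1-9]{26,33})\b"),
}

def looks_like_crypto_address(text):
    """Return the coin whose address pattern the text matches, else None.
    This is the detection half of a clipboard-swap attack; a defender can
    run the same check to flag address-like clipboard contents."""
    for coin, pattern in PATTERNS.items():
        if pattern.search(text):
            return coin
    return None
```

A clipboard stealer pairs a check like this with a replace step that substitutes an attacker wallet of the same coin type, which is why a manual before-paste comparison of the first and last characters of an address is a common mitigation.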
- Windows 11 update breaks the Recovery Environment
The Windows Recovery Environment (WinRE) is used to troubleshoot a PC after a boot failure, including booting into the BIOS or starting the machine in safe mode. The Windows 11 October update KB5066835 has a bug that leaves USB keyboards and mice unresponsive inside WinRE, rendering it useless for most users. PS/2 keyboards and mice are unaffected, but almost no modern computers use such peripherals. Microsoft says a fix is in development.
- Japanese e-commerce giant ASKUL hit by ransomware
Japanese e-commerce giant ASKUL announced on the 19th that a cyberattack had caused a system failure and that it had suspended order intake and shipping; the failure was traced to a ransomware infection. Ryohin Keikaku, the operator of MUJI, disclosed on the 20th that it had suspended online sales on the night of the 19th: it outsources part of its distribution to an ASKUL subsidiary, and logistics were disrupted. ASKUL discovered the infection on the morning of the 19th and has no estimate yet for recovery. It is investigating the scope of the impact, including whether personal information or customer data was leaked, and apologized for the trouble and concern caused.
- GIMP now officially packaged as a Snap
GIMP now officially offers a Snap package, alongside its existing Flatpak package and the MSIX package in the Microsoft Store. Snap is the package format led by Canonical. GIMP's previous, unofficial Snap was maintained by the downstream Snapcrafters project; when GIMP reached out, Snapcrafters readily agreed to transfer ownership. GIMP 3.0.6 is the first officially maintained Snap release.
- NVIDIA unveils first US-made Blackwell chip
NVIDIA and TSMC showed off the first US-made Blackwell chip in Phoenix, Arizona. Blackwell is NVIDIA's latest GPU architecture, the one behind the GeForce RTX 5000 series. CEO Jensen Huang called it a historic moment for many reasons: for the first time in recent American history, the most important chips are being produced in the US at TSMC's most advanced fabs. Blackwell will be made at Fab 21, the first phase of TSMC's Arizona site, which is slated for volume production of advanced 2 nm, 3 nm, 4 nm, and A16 processes. NVIDIA said that with Blackwell production under way in Arizona, it will be better insulated from shifting tariffs and geopolitical tensions.
- PoC published for 7-Zip remote code execution flaws
On October 7 the popular archiver 7-Zip disclosed CVE-2025-11001 and CVE-2025-11002, two vulnerabilities that let attackers execute arbitrary code remotely via malicious ZIP files. Although the CVSS v3.0 score is 7.0, the potential impact is large. Versions 7-Zip 21.02 through 24.09 are affected; the root cause is a flaw in symbolic-link conversion that enables path traversal. Security researcher pacbypass has published proof-of-concept exploit code on GitHub, so 7-Zip users should update to v25.00 immediately.
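The underlying bug class, path traversal during archive extraction, comes down to an extractor writing outside its destination directory. A minimal illustration of the defensive check (not 7-Zip's actual code; names are ours):

```python
import os

def safe_extract_path(dest_dir, member_name):
    """Reject archive member names that would escape dest_dir, the class
    of path traversal behind the symlink-conversion flaw described above.
    Resolving both paths first defeats '..' segments and symlink tricks
    for paths that already exist."""
    dest_dir = os.path.realpath(dest_dir)
    target = os.path.realpath(os.path.join(dest_dir, member_name))
    if os.path.commonpath([dest_dir, target]) != dest_dir:
        raise ValueError(f"blocked traversal: {member_name}")
    return target
```

An extractor would call this for every member (and for every symlink target) before creating anything on disk.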
- GPT-5 did not solve open math problems
OpenAI executive Kevin Weil claimed that the GPT-5 model had found solutions to 10 unsolved Erdős problems. Thomas Bloom, the mathematician who runs the Erdős problems site erdosproblems.com, then clarified that on his site "unsolved" merely means he does not know the answer, not that the problem is genuinely open. DeepMind CEO Demis Hassabis called the episode embarrassing, and Meta's chief AI scientist Yann LeCun said OpenAI had been taken in by its own AI narrative. The OpenAI researchers involved later deleted the tweets. Mathematician Terence Tao noted that AI's greatest potential in mathematics is not cracking hard open problems but accelerating literature search, saving mathematicians' time.
- Wikipedia volunteers subdue armed man threatening suicide at conference
A Wikipedia conference in Manhattan, New York saw a harrowing incident: an armed man walked on stage declaring he would kill himself in protest of Wikipedia's child-protection policies, and two Wikipedia volunteers on the scene subdued him, averting tragedy. Draped in a colorful flag, the man interrupted a speech by Wikimedia Foundation CEO Maryana Iskander, saying he would kill himself to protest the policy banning self-identified pedophiles. He raised the gun above his head toward the ceiling as hundreds of attendees panicked. Richard Knipel and Andrew Lih, volunteers on the conference's trust and safety team, rushed him, pried open his fingers, and took away the weapon, which was loaded at the time. The two volunteers became unexpected heroes.
- Mouse study: new therapy clears dementia-linked brain plaques within hours
According to a study published in Signal Transduction and Targeted Therapy, a new Alzheimer's therapy cleared plaques from mouse brains within hours. Just three injections reversed key Alzheimer's-like pathology in the mice: within hours of the first injection, amyloid-beta plaques in the brain fell by nearly 45%, and mice showing cognitive decline performed on par with healthy mice in spatial learning and memory tasks after three injections, an effect that lasted at least six months. Results in mice may not carry over to humans, but the authors call it an encouraging start heralding a new era of drug research. The study, carried out jointly by the Institute for Bioengineering of Catalonia (IBEC) and West China Hospital of Sichuan University, is based on the hypothesis that a weakened or damaged blood-brain barrier in Alzheimer's patients lets waste accumulate in the brain; the researchers used nanoparticles to repair the barrier, making amyloid-beta easier to clear.
- Physicist Chen-Ning Yang dies at 103
Renowned physicist and Nobel laureate Chen-Ning Yang (杨振宁) died in Beijing on October 18 at the age of 103. Born in Hefei, Anhui in 1922, Yang did his undergraduate and master's studies at the National Southwestern Associated University during the war against Japan, then went to the US for his doctorate. In 1956 he and fellow Chinese-American physicist Tsung-Dao Lee proposed parity non-conservation, for which they shared the 1957 Nobel Prize in Physics, making them among the first Chinese Nobel laureates. In 1954, Yang and Robert Mills created the Yang-Mills gauge theory, extending the idea that the electromagnetic interaction is determined by local gauge invariance to non-abelian local symmetry groups, yielding a theory with local isospin invariance.
- Wikipedia says AI is driving down human traffic
Data from earlier this year showed that question-and-answer volume on Stack Overflow, the best-known programming Q&A community, has fallen by more than 90% since April 2020, largely due to the rise of AI chatbots. Now the largest online encyclopedia reports a similar problem. The Wikimedia Foundation, which runs Wikipedia, says traffic has dropped noticeably as more people get their information from AI chatbots and zero-click search engines. Marshall Miller, the foundation's senior director of product, wrote on the Diff blog that human page views over the past few months were down about 8% from the same period in 2024. He voiced concern about Wikipedia's long-term sustainability: declining traffic could mean fewer volunteer editors enriching content, which in turn could mean fewer individual donors.