Weekly Digest — 2025-W50
140 unique stories (2025-12-08 → 2025-12-14), aggregated across 8 sources.
Hacker News(41)
- Deep dive on Nvidia circular funding (philippeoger.com)
- Jepsen: NATS 2.12.1 (jepsen.io)
- Microsoft has a problem: lack of demand for its AI products (www.windowscentral.com)
- Strong earthquake hits northern Japan, tsunami warning issued (www3.nhk.or.jp)
- Hunting for North Korean Fiber Optic Cables (nkinternet.com)
- Let's put Tailscale on a jailbroken Kindle (tailscale.com)
- 10 Years of Let's Encrypt (letsencrypt.org)
- PeerTube is recognized as a digital public good by Digital Public Goods Alliance (www.digitalpublicgoods.net)
- If you're going to vibe code, why not do it in C? (stephenramsay.net)
- Hands down one of the coolest 3D websites (bruno-simon.com)
- Show HN: Gemini Pro 3 hallucinates the HN front page 10 years from now (dosaygo-studio.github.io)
- Ask HN: Should "I asked $AI, and it said" replies be forbidden in HN guidelines?
GitHub Trending(28)
- microsoft / VibeVoice
Open-Source Frontier Voice AI
- sinelaw / fresh
Text editor for your terminal: easy, powerful and fast
- winapps-org / winapps
Run Windows apps such as Microsoft Office/Adobe in Linux (Ubuntu/Fedora) and GNOME/KDE as if they were a part of the native OS, including Nautilus integration. Hard fork of https://github.com/Fmstrat/winapps/
- patchy631 / ai-engineering-hub
In-depth tutorials on LLMs, RAGs and real-world AI agent applications.
- slidevjs / slidev
Presentation Slides for Developers
- cloudflare / vibesdk
An open-source vibe coding platform that helps you build your own vibe-coding platform, built entirely on Cloudflare stack
- KaijuEngine / kaiju
General purpose 3D and 2D game engine using Go (golang) and Vulkan with built in editor
- thedotmack / claude-mem
A Claude Code plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude's agent-sdk), and injects relevant context back into future sessions.
- dyad-sh / dyad
Free, local, open-source AI app builder ✨ v0 / lovable / Bolt alternative 🌟 Star if you like it!
- NVIDIA / cutile-python
cuTile is a programming model for writing parallel kernels for NVIDIA GPUs
- google / adk-samples
A collection of sample agents built with Agent Development Kit (ADK)
- agentsmd / agents.md
AGENTS.md — a simple, open format for guiding coding agents
Hugging Face(31)
- TwinFlow: Realizing One-step Generation on Large Models with Self-adversarial Flows
Recent advances in large multi-modal generative models have demonstrated impressive capabilities in multi-modal generation, including image and video generation. These models are typically built on multi-step frameworks such as diffusion and flow matching, which inherently limits inference efficiency (requiring 40-100 function evaluations, or NFEs). While various few-step methods aim to accelerate inference, existing solutions have clear limitations. Prominent distillation-based methods, such as progressive and consistency distillation, either require an iterative distillation procedure or degrade significantly at very few steps (<4 NFEs). Meanwhile, integrating adversarial training into distillation (e.g., DMD/DMD2 and SANA-Sprint) to boost performance introduces training instability, added complexity, and high GPU memory overhead due to the auxiliary trained models. To this end, we propose TwinFlow, a simple yet effective framework for training 1-step generative models that bypasses the need for fixed pretrained teacher models and avoids standard adversarial networks during training, making it ideal for building large-scale, efficient models. On text-to-image tasks, our method achieves a GenEval score of 0.83 at 1 NFE, outperforming strong baselines like SANA-Sprint (a GAN-loss-based framework) and RCGM (a consistency-based framework). Notably, we demonstrate the scalability of TwinFlow by full-parameter training on Qwen-Image-20B, transforming it into an efficient few-step generator. With just 1 NFE, our approach matches the performance of the original 100-NFE model on both the GenEval and DPG-Bench benchmarks, reducing computational cost by 100x with minor quality degradation. Project page is available at https://zhenglin-cheng.com/twinflow.
- EditThinker: Unlocking Iterative Reasoning for Any Image Editor
Instruction-based image editing has emerged as a prominent research area; benefiting from image generation foundation models, it has achieved high aesthetic quality, making instruction-following capability the primary challenge. Existing approaches improve instruction adherence via supervised or reinforcement learning, yet single-turn success rates remain limited due to inherent stochasticity and a lack of deliberation. In this work, we propose a deliberative editing framework that lets models 'think' while they edit, simulating the human cognitive loop by iteratively executing a Think-while-Edit cycle: critiquing results and refining instructions, then repeating the generation until the result is satisfactory. Specifically, we train a single MLLM, EditThinker, to act as the reasoning engine of this framework, jointly producing the critique score, reasoning process, and refined instructions. We employ reinforcement learning to align EditThinker's thinking with its editing, thereby generating more targeted instruction improvements. Extensive experiments on four benchmarks demonstrate that our approach significantly improves the instruction-following capability of any image editing model by a large margin. We will release our data construction framework, datasets, and models to benefit the community.
- From Imitation to Discrimination: Toward A Generalized Curriculum Advantage Mechanism Enhancing Cross-Domain Reasoning Tasks
Reinforcement learning has emerged as a paradigm for post-training large language models, boosting their reasoning capabilities. Such approaches compute an advantage value for each sample, reflecting better- or worse-than-expected performance, thereby yielding both positive and negative training signals. However, indiscriminately mixing the two signals, as existing methods do from the earliest stages, may lead to ambiguous guidance and limited gains. To address this issue, we propose CAPO (Curriculum Advantage Policy Optimization), an adaptive curriculum mechanism based on advantage signals. The mechanism bootstraps imitation learning with positive-only advantage samples to establish robust foundations, and subsequently introduces negative signals to cultivate discriminative capabilities, thereby improving generalization across complex scenarios. Compatible with diverse optimization methods including GRPO, PPO, RLOO, and Reinforce++, our method consistently achieves stable and significant improvements on mathematical reasoning tasks, and further generalizes effectively to multimodal graphical user interface (GUI) reasoning scenarios, establishing itself as a versatile and robust optimization framework.
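The paper does not include code; as a rough illustration of the curriculum idea described above (the function name, step schedule, and thresholds here are hypothetical, not from the paper), a positive-only phase followed by a mixed phase might look like:

```python
import numpy as np

def curriculum_advantages(advantages: np.ndarray, step: int,
                          warmup_steps: int) -> np.ndarray:
    """Sketch of a curriculum over advantage signals.

    During the warmup (imitation) phase, negative advantages are masked
    to zero so only better-than-expected samples drive the policy update;
    afterwards both signs are kept to build discriminative capability.
    """
    if step < warmup_steps:
        return np.where(advantages > 0, advantages, 0.0)
    return advantages

adv = np.array([1.5, -0.5, 0.2, -2.0])
early = curriculum_advantages(adv, step=10, warmup_steps=100)   # positives only
late = curriculum_advantages(adv, step=500, warmup_steps=100)   # both signals
```

The same masking would slot into any advantage-based objective (GRPO, PPO, RLOO), which is presumably what makes the mechanism optimizer-agnostic.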
- EMMA: Efficient Multimodal Understanding, Generation, and Editing with a Unified Architecture
We propose EMMA, an efficient and unified architecture for multimodal understanding, generation, and editing. EMMA primarily consists of: 1) an efficient autoencoder with a 32x compression ratio, which significantly reduces the number of tokens required for generation and, by applying the same compression ratio to images, balances training between understanding and generation tasks; 2) channel-wise rather than token-wise concatenation of visual understanding and generation tokens, which further reduces visual tokens in unified architectures; 3) a shared-and-decoupled network that enables mutual improvement across tasks while meeting task-specific modeling requirements; and 4) a mixture-of-experts mechanism in the visual understanding encoder, which substantially improves perceptual capabilities with only a small increase in parameters. Extensive experiments show that EMMA-4B significantly outperforms state-of-the-art unified multimodal approaches (e.g., BAGEL-7B) in both efficiency and performance, while achieving competitive results against recent specialized multimodal understanding and generation models (e.g., Qwen3-VL and Qwen-Image). We believe EMMA lays a solid foundation for the future development of unified multimodal architectures.
- PaCo-RL: Advancing Reinforcement Learning for Consistent Image Generation with Pairwise Reward Modeling
Consistent image generation requires faithfully preserving identities, styles, and logical coherence across multiple images, which is essential for applications such as storytelling and character design. Supervised training approaches struggle with this task due to the lack of large-scale datasets capturing visual consistency and the complexity of modeling human perceptual preferences. In this paper, we argue that reinforcement learning (RL) offers a promising alternative by enabling models to learn complex and subjective visual criteria in a data-free manner. To achieve this, we introduce PaCo-RL, a comprehensive framework that combines a specialized consistency reward model with an efficient RL algorithm. The first component, PaCo-Reward, is a pairwise consistency evaluator trained on a large-scale dataset constructed via automated sub-figure pairing. It evaluates consistency through a generative, autoregressive scoring mechanism enhanced by task-aware instructions and chain-of-thought (CoT) reasoning. The second component, PaCo-GRPO, leverages a novel resolution-decoupled optimization strategy to substantially reduce RL cost, alongside a log-tamed multi-reward aggregation mechanism that ensures balanced and stable reward optimization. Extensive experiments across two representative subtasks show that PaCo-Reward significantly improves alignment with human perceptions of visual consistency, and PaCo-GRPO achieves state-of-the-art consistency performance with improved training efficiency and stability. Together, these results highlight the promise of PaCo-RL as a practical and scalable solution for consistent image generation. The project page is available at https://x-gengroup.github.io/HomePage_PaCo-RL/.
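One plausible reading of "log-tamed" multi-reward aggregation (an assumption; the paper's exact formula may differ) is a log transform over each reward term so that no single large reward dominates the mixture:

```python
import numpy as np

def log_tamed_aggregate(rewards: np.ndarray) -> float:
    """Aggregate several reward terms with log1p so that one spiky
    reward cannot dominate the sum (hypothetical reading of
    'log-tamed' aggregation, not the authors' code)."""
    return float(np.sum(np.log1p(np.clip(rewards, 0.0, None))))

# Both reward vectors sum to 3.0 raw, but the balanced one wins
# after taming, which is the stabilizing behavior described above.
balanced = log_tamed_aggregate(np.array([1.0, 1.0, 1.0]))
spiky = log_tamed_aggregate(np.array([3.0, 0.0, 0.0]))
```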
- Entropy Ratio Clipping as a Soft Global Constraint for Stable Reinforcement Learning
Large language model post-training relies on reinforcement learning to improve model capability and alignment quality. However, the off-policy training paradigm introduces distribution shift, which often pushes the policy beyond the trust region, leading to training instabilities that manifest as fluctuations in policy entropy and unstable gradients. Although PPO-Clip mitigates this issue through importance clipping, it still overlooks the global distributional shift of actions. To address these challenges, we propose using the entropy ratio between the current and previous policies as a new global metric that effectively quantifies the relative change in policy exploration throughout updates. Building on this metric, we introduce an Entropy Ratio Clipping (ERC) mechanism that imposes bidirectional constraints on the entropy ratio. This stabilizes policy updates at the global distribution level and compensates for the inability of PPO-Clip to regulate probability shifts of unsampled actions. We integrate ERC into both the DAPO and GPPO reinforcement learning algorithms. Experiments across multiple benchmarks show that ERC consistently improves performance.
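As a minimal sketch of the entropy-ratio idea (the band thresholds and the accept/reject check are assumptions for illustration; the paper clips rather than simply rejects), one can compute policy entropies over a token distribution and test whether the ratio stays inside a bidirectional band:

```python
import numpy as np

def entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a categorical distribution."""
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())

def entropy_ratio_ok(p_new: np.ndarray, p_old: np.ndarray,
                     low: float = 0.9, high: float = 1.1) -> bool:
    """Bidirectional constraint: keep H(new)/H(old) inside [low, high],
    bounding the global shift in exploration between policy updates."""
    ratio = entropy(p_new) / entropy(p_old)
    return low <= ratio <= high

old = np.array([0.25, 0.25, 0.25, 0.25])        # maximally exploratory
collapsed = np.array([0.97, 0.01, 0.01, 0.01])  # entropy collapse
print(entropy_ratio_ok(old, old))        # unchanged policy: inside band
print(entropy_ratio_ok(collapsed, old))  # sharp collapse: outside band
```

The point of the global metric is visible here: per-token importance clipping would never see the collapse, because it only looks at sampled actions, while the entropy ratio summarizes the whole distribution.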
- Native Parallel Reasoner: Reasoning in Parallelism via Self-Distilled Reinforcement Learning
We introduce Native Parallel Reasoner (NPR), a teacher-free framework that enables Large Language Models (LLMs) to self-evolve genuine parallel reasoning capabilities. NPR transforms the model from sequential emulation to native parallel cognition through three key innovations: 1) a self-distilled progressive training paradigm that transitions from "cold-start" format discovery to strict topological constraints without external supervision; 2) a novel Parallel-Aware Policy Optimization (PAPO) algorithm that optimizes branching policies directly within the execution graph, allowing the model to learn adaptive decomposition via trial and error; and 3) a robust NPR Engine that refactors the memory management and flow control of SGLang to enable stable, large-scale parallel RL training. Across eight reasoning benchmarks, NPR trained on Qwen3-4B achieves performance gains of up to 24.5% and inference speedups of up to 4.6x. Unlike prior baselines that often fall back to autoregressive decoding, NPR demonstrates 100% genuine parallel execution, establishing a new standard for self-evolving, efficient, and scalable agentic reasoning.
- Beyond Real: Imaginary Extension of Rotary Position Embeddings for Long-Context LLMs
Rotary Position Embeddings (RoPE) have become a standard for encoding sequence order in Large Language Models (LLMs) by applying rotations to query and key vectors in the complex plane. Standard implementations, however, utilize only the real component of the complex-valued dot product for attention score calculation. This simplification discards the imaginary component, which contains valuable phase information, leading to a potential loss of relational details crucial for modeling long-context dependencies. In this paper, we propose an extension that re-incorporates this discarded imaginary component. Our method leverages the full complex-valued representation to create a dual-component attention score. We theoretically and empirically demonstrate that this approach enhances the modeling of long-context dependencies by preserving more positional information. Furthermore, evaluations on a suite of long-context language modeling benchmarks show that our method consistently improves performance over the standard RoPE, with the benefits becoming more significant as context length increases. The code is available at https://github.com/OpenMOSS/rope_pp.
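A toy illustration of the idea above (a deliberate simplification, not the paper's implementation): treating a 2D query/key pair as complex numbers, standard RoPE attention keeps only the real part of q·conj(k), while the proposed extension also retains the imaginary part, which carries the relative phase:

```python
import numpy as np

def rope_rotate(x: np.ndarray, pos: int, theta: float = 0.1) -> np.ndarray:
    """Rotate a 2D vector, viewed as a complex number, by pos * theta."""
    z = complex(x[0], x[1]) * np.exp(1j * pos * theta)
    return np.array([z.real, z.imag])

q = rope_rotate(np.array([1.0, 0.0]), pos=5)
k = rope_rotate(np.array([1.0, 0.0]), pos=2)

zq, zk = complex(q[0], q[1]), complex(k[0], k[1])
score = zq * np.conj(zk)   # complex-valued "attention score", angle = relative position
real_part = score.real     # what standard RoPE keeps
imag_part = score.imag     # the phase term the paper re-incorporates
```

For identical unit vectors at positions 5 and 2, the score is exp(i·0.3): the real part is cos(0.3) and the discarded imaginary part sin(0.3), showing that the phase encodes the relative offset even when magnitudes match.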
- Unified Video Editing with Temporal Reasoner
Existing video editing methods face a critical trade-off: expert models offer precision but rely on task-specific priors like masks, hindering unification; conversely, unified temporal in-context learning models are mask-free but lack explicit spatial cues, leading to weak instruction-to-region mapping and imprecise localization. To resolve this conflict, we propose VideoCoF, a novel Chain-of-Frames approach inspired by Chain-of-Thought reasoning. VideoCoF enforces a "see, reason, then edit" procedure by compelling the video diffusion model to first predict reasoning tokens (edit-region latents) before generating the target video tokens. This explicit reasoning step removes the need for user-provided masks while achieving precise instruction-to-region alignment and fine-grained video editing. Furthermore, we introduce a RoPE alignment strategy that leverages these reasoning tokens to ensure motion alignment and enable length extrapolation beyond the training duration. We demonstrate that with a minimal data cost of only 50k video pairs, VideoCoF achieves state-of-the-art performance on VideoCoF-Bench, validating the efficiency and effectiveness of our approach. Our code, weights, and data are available at https://github.com/knightyxp/VideoCoF.
- Voxify3D: Pixel Art Meets Volumetric Rendering
Voxel art is a distinctive stylization widely used in games and digital media, yet automated generation from 3D meshes remains challenging due to conflicting requirements of geometric abstraction, semantic preservation, and discrete color coherence. Existing methods either over-simplify geometry or fail to achieve the pixel-precise, palette-constrained aesthetics of voxel art. We introduce Voxify3D, a differentiable two-stage framework bridging 3D mesh optimization with 2D pixel art supervision. Our core innovation lies in the synergistic integration of three components: (1) orthographic pixel art supervision that eliminates perspective distortion for precise voxel-pixel alignment; (2) patch-based CLIP alignment that preserves semantics across discretization levels; (3) palette-constrained Gumbel-Softmax quantization enabling differentiable optimization over discrete color spaces with controllable palette strategies. This integration addresses fundamental challenges: semantic preservation under extreme discretization, pixel-art aesthetics through volumetric rendering, and end-to-end discrete optimization. Experiments show superior performance (37.12 CLIP-IQA, 77.90% user preference) across diverse characters and controllable abstraction (2-8 colors, 20x-50x resolutions). Project page: https://yichuanh.github.io/Voxify-3D/
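The palette-constrained quantization can be pictured as a softmax over (negative) distances to a fixed palette, with Gumbel noise making the discrete color choice differentiable. This is a generic Gumbel-Softmax sketch under an assumed temperature and toy palette, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_color(color: np.ndarray, palette: np.ndarray,
                         tau: float = 0.5) -> np.ndarray:
    """Differentiable soft assignment of a continuous RGB color to a
    fixed palette: logits are negative squared distances, perturbed by
    Gumbel noise and pushed through a temperature-scaled softmax."""
    logits = -((palette - color) ** 2).sum(axis=1)
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = np.exp((logits + gumbel) / tau)
    weights = y / y.sum()
    return weights @ palette   # soft palette color; hardens as tau -> 0

palette = np.array([[0.0, 0.0, 0.0],   # black
                    [1.0, 0.0, 0.0],   # red
                    [1.0, 1.0, 1.0]])  # white
soft = gumbel_softmax_color(np.array([0.9, 0.1, 0.1]), palette)
```

Because the output is a convex combination of palette entries, gradients flow through `weights` during training, while annealing `tau` toward zero recovers a hard, palette-exact color at render time.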
- Scaling Zero-Shot Reference-to-Video Generation
Reference-to-video (R2V) generation aims to synthesize videos that align with a text prompt while preserving the subject identity from reference images. However, current R2V methods are hindered by the reliance on explicit reference image-video-text triplets, whose construction is highly expensive and difficult to scale. We bypass this bottleneck by introducing Saber, a scalable zero-shot framework that requires no explicit R2V data. Trained exclusively on video-text pairs, Saber employs a masked training strategy and a tailored attention-based model design to learn identity-consistent and reference-aware representations. Mask augmentation techniques are further integrated to mitigate copy-paste artifacts common in reference-to-video generation. Moreover, Saber demonstrates remarkable generalization capabilities across a varying number of references and achieves superior performance on the OpenS2V-Eval benchmark compared to methods trained with R2V data.
- DoVer: Intervention-Driven Auto Debugging for LLM Multi-Agent Systems
Large language model (LLM)-based multi-agent systems are challenging to debug because failures often arise from long, branching interaction traces. The prevailing practice is to leverage LLMs for log-based failure localization, attributing errors to a specific agent and step. However, this paradigm has two key limitations: (i) log-only debugging lacks validation, producing untested hypotheses, and (ii) single-step or single-agent attribution is often ill-posed, as we find that multiple distinct interventions can independently repair the failed task. To address the first limitation, we introduce DoVer, an intervention-driven debugging framework, which augments hypothesis generation with active verification through targeted interventions (e.g., editing messages, altering plans). For the second limitation, rather than evaluating on attribution accuracy, we focus on measuring whether the system resolves the failure or makes quantifiable progress toward task success, reflecting a more outcome-oriented view of debugging. Within the Magnetic-One agent framework, on the datasets derived from GAIA and AssistantBench, DoVer flips 18-28% of failed trials into successes, achieves up to 16% milestone progress, and validates or refutes 30-60% of failure hypotheses. DoVer also performs effectively on a different dataset (GSMPlus) and agent framework (AG2), where it recovers 49% of failed trials. These results highlight intervention as a practical mechanism for improving reliability in agentic systems and open opportunities for more robust, scalable debugging methods for LLM-based multi-agent systems. Project website and code will be available at https://aka.ms/DoVer.
Solidot(40)
- EU fines X €120 million; X blocks EU advertising accounts
Last Friday the European Commission fined Elon Musk's X/Twitter platform €120 million under the Digital Services Act, citing violations of EU transparency rules, insufficient data access, and the deceptive design of its blue checkmark for verified accounts: the company does not actually verify users' identities, it simply grants the mark to anyone who pays. Musk responded by saying the EU should be abolished, while senior X executive Nikita Bier announced a ban on EU advertising accounts, claiming the EU had exploited a "loophole" in X's advertising system to promote its Friday post announcing the fine. A European Commission spokesperson responded that the Commission was simply using the tools X offers to business accounts.
- JavaScript turns thirty
Thirty years ago, on December 4, Netscape Communications and Sun Microsystems issued a press release officially announcing JavaScript, an object scripting language designed for building interactive web applications. Netscape engineer Brendan Eich built an internal prototype in a 10-day sprint in May 1995, and JavaScript 1.0 shipped in March 1996. Thirty years on, JavaScript runs on 98.9% of websites that use client-side code, making it the dominant programming language of the web. Beyond the browser, JavaScript powers server backends, mobile apps, desktop software, and even some embedded systems, and it has long been one of the world's most widely used languages. Most of the early corporate backers of JavaScript, including Netscape and Sun, have essentially disappeared; JavaScript has outlived them all. The language went through several names, first Mocha and then LiveScript, until Netscape and Sun signed a licensing agreement that December and officially named it JavaScript. The name long caused confusion with Sun's Java language, even though, apart from the name and some syntax, the two are essentially unrelated. Oracle inherited the JavaScript trademark when it acquired Sun but never built a product under the name; Brendan Eich and others argued in an open letter that Oracle had abandoned the trademark through non-use, making JavaScript a generic term.
- Common antidepressant significantly reduces domestic violence by men
Domestic violence is a global problem. Australian researchers studied whether sertraline, a widely used antidepressant, reduces domestic violence. From 1,738 men in New South Wales they randomly selected 630 to receive either sertraline or a placebo. Most of the participants had prior records of domestic violence and were recruited through community corrections agencies and the courts. Sertraline works by enhancing serotonin function in the brain; because serotonin plays an important role in impulse control and emotional regulation, this helps address a key driver of violent behavior: the inability to calm down and control one's emotions. After 12 months, the reoffending rate in the sertraline group (19.1%) was lower than in the placebo group (24.8%); after 24 months, it was 28.2% versus 35.7%. Men who took the medication more consistently had a 30% lower reoffending rate at 24 months.
- Porsches across Russia disabled after satellite connection cut off
Porsche owners in Russia have reported cars that fail to start: engines do not turn over and dashboard lights stay dark, as if the cars had been bricked. The problem was first reported in late November. Rolf, Russia's largest Porsche dealer, confirmed that it stems from the cars' Vehicle Tracking System (VTS) being completely cut off from its satellites. VTS is a satellite-based anti-theft system; when the satellite link drops, it assumes the car may have been stolen and activates anti-theft measures, cutting the fuel supply and fully locking the engine. The issue affects every Porsche model equipped with VTS.
- South African penguins starve en masse amid food shortage
African penguins molt every year, shedding worn feathers for new ones to keep their plumage insulating and waterproof. During the molt a penguin stays on land for three weeks and cannot hunt, so the species has evolved to store fat and live off it through the molting fast. Afterward it must quickly find food to rebuild its strength; penguins that cannot find enough food before and after the molt struggle to survive. South African researchers report that between 2004 and 2011, sardine stocks off South Africa's west coast fell below a quarter of their peak (mainly due to overfishing), likely causing mass penguin deaths from severe food shortage; an estimated 62,000 penguins died in that period. The African penguin was listed as critically endangered in 2024. The researchers say penguin populations in other regions have also declined sharply, with the global population falling nearly 80% over the past 30 years.
- Calibre's latest update adds AI; unhappy users create an AI-free fork
Calibre, the well-known open-source e-book manager, released update v8.16.2 last week. Its main change is a set of AI features: ask an AI questions about any book in your Calibre library; right-click "View" and choose "Discuss selected book with AI"; right-click a book and use the "Similar books" menu to ask an AI what to read next; and a new LM Studio backend for running different AI models locally. Users unhappy with the AI promptly created a fork, Clbre, whose main change is the removal of the AI features. Clbre's code is hosted on GitHub, the Microsoft platform that is itself aggressively integrating AI.
- Surging cancer rates spark debate over early detection
Since 1992, diagnosis rates for eight cancers have doubled among Americans under 50. The American Association for Cancer Research said it will hold a special meeting this week to discuss rising cancer rates among younger people. Some experts say it is urgent to identify the cause of the trend; others argue there is no need for alarm, since many cancers are being detected too early and would never have been fatal. It has been known for decades that not all cancers are dangerous: some disappear on their own, and some stop growing or pose no risk at all, causing no symptoms and never spreading. The problem is that it is impossible to know whether any individual's cancer will be lethal. Dr. H. Gilbert Welch of Harvard Medical School argues that one way to tell whether rising diagnoses are a false alarm or a genuine danger signal is to check whether deaths are rising too: if incidence soars while mortality stays flat, many of those patients did not need to be diagnosed. The rise in US diagnosis rates for the eight cancers has not been accompanied by a rise in deaths; only colorectal and endometrial cancer show slightly higher mortality, with endometrial cancer thought to be linked to the obesity epidemic. Dr. Cary Gross of Yale believes the rising diagnosis rates may reflect the improved sensitivity and increased use of detection tools such as CT, ultrasound, and MRI.
- Cryptocurrency helps criminals launder money and evade sanctions
Smugglers, money launderers, and people facing sanctions used to hide illicit wealth in luxury goods such as diamonds, gold, and art, assets that are awkward to move and spend. Today's stablecoins let criminals launder money and evade sanctions with ease. A stablecoin is a cryptocurrency pegged to the US dollar. A February report from blockchain analytics firm Chainalysis estimated that illicit transactions involving stablecoins reached $25 billion last year. The rise of stablecoins threatens sanctions, one of America's most powerful foreign policy tools. Ari Redbord, head of policy at blockchain data firm TRM Labs, said that when criminals can move millions of dollars in a few mouse clicks, the effectiveness of economic penalties such as sanctions is greatly diminished. For decades the US Treasury has relied on banks and credit card companies to police illicit finance through compliance measures; stablecoins bypass that system entirely.
- RMS on ChatGPT
Richard Stallman (RMS), who worked for many years at the MIT AI Lab, argues that ChatGPT has no intelligence and should not be called AI. By his definition, intelligence means knowing, understanding, or mastering knowledge in at least one domain. ChatGPT neither knows nor understands anything, so it is not intelligent: it does not know what its output means, nor that words can describe the world. He calls ChatGPT a bullshit generator, producing output with no regard for whether it is true, and says other generative AI systems suffer from the same problem. People should not trust systems that mechanically shuffle words without truly understanding their meaning. RMS also notes that ChatGPT is proprietary software running on cloud servers, and therefore endangers users' computing freedom.
- EU opens antitrust investigation into Google's AI
The EU announced an investigation into Google on Tuesday. It will assess whether Google violated antitrust rules by using content that media outlets and other publishers post online to train and power its AI services without appropriate compensation. The European Commission said the probe will examine whether Google distorted competition by imposing unfair terms on publishers and content creators, or by granting itself preferential access to their content. EU competition chief Teresa Ribera said a free and democratic society depends on plural media, open access to information, and a vibrant creative environment. AI is delivering remarkable innovation and many benefits to people and businesses across Europe, she said, but progress must not come at the expense of society's core principles.
- Insufficient sleep linked to reduced life expectancy
According to a study published in the journal SLEEP Advances, insufficient sleep is associated with reduced life expectancy. The study, focused on the United States, found that sleep's impact on life expectancy is second only to smoking, exceeding diet, exercise, loneliness, and other factors. Lead author Dr. Andrew McHill, an associate professor at the OHSU School of Nursing, said the study underscores the importance of getting a full seven to nine hours of sleep a day. The study did not investigate why insufficient sleep shortens life expectancy, but McHill noted that sleep affects cardiovascular health, the immune system, and brain function. The findings suggest we should value sleep as much as we value diet and exercise, he said; good sleep not only improves mental state but can also extend life.
- Google plans Gemini-integrated AI glasses for next year
Google Glass of the 2010s is coming back. Google's official blog revealed that the company is developing two different types of AI smart glasses, planned for launch next year to compete with Meta's existing products. One model has a display; the other focuses on audio. Hardware partners include Samsung of South Korea, Warby Parker of the US, and Gentle Monster of South Korea. Google showed a sample of Project Aura, AI glasses developed with Chinese company Xreal. Project Aura runs Android XR, requires an external battery pack to operate, and offers a 70-degree field of view.