Weekly Digest — 2025-W40
124 unique stories (2025-09-29 → 2025-10-05), aggregated across 8 sources.
Hacker News (41)
- FCC Accidentally Leaked iPhone Schematics (www.engadget.com)
- Claude Code 2.0 (www.npmjs.com)
- Claude Sonnet 4.5 (www.anthropic.com)
- Loadmo.re: design inspiration for unconventional web (loadmo.re)
- Write the damn code (antonz.org)
- Meta-analysis of 2.2M people: Loneliness increases mortality risk by 32% (lightcapai.medium.com)
- Inflammation now predicts heart disease more strongly than cholesterol (www.empirical.health)
- Boeing has started working on a 737 MAX replacement (www.wsj.com)
- Extract-0: A specialized language model for document information extraction (arxiv.org)
- Sora 2 (openai.com)
- Leaked Apple M5 9-core Geekbench scores (browser.geekbench.com)
- U.S. Lost 32,000 Private-Sector Jobs in September, Says Payroll Processor (www.wsj.com)
GitHub Trending (27)
- harry0703 / MoneyPrinterTurbo
Generate high-definition short videos with one click using AI large language models.
- commaai / openpilot
openpilot is an operating system for robotics. Currently, it upgrades the driver assistance system on 300+ supported cars.
- kamranahmedse / developer-roadmap
Interactive roadmaps, guides and other educational content to help developers grow in their careers.
- Done-0 / fuck-u-code
Legacy-Mess Detector – assess the “legacy-mess level” of your code and output a beautiful report
- frappe / erpnext
Free and Open Source Enterprise Resource Planning (ERP)
- snarktank / ai-dev-tasks
A simple task management system for managing AI dev agents
- nextcloud / server
☁️ Nextcloud server, a safe home for all your data
- typst / typst
A new markup-based typesetting system that is powerful and easy to learn.
- fastapi / fastapi
FastAPI framework, high performance, easy to learn, fast to code, ready for production
- DevCaress / guia-entrevistas-de-programacion
- anthropics / claude-agent-sdk-python
- lobehub / lobe-chat
🤯 Lobe Chat - an open-source, modern-design AI chat framework. Supports multiple AI providers (OpenAI / Claude 4 / Gemini / DeepSeek / Ollama / Qwen), knowledge base (file upload / RAG), one-click MCP Marketplace install, and Artifacts / Thinking. One-click FREE deployment of your private AI Agent application.
Hugging Face (30)
- LongLive: Real-time Interactive Long Video Generation
We present LongLive, a frame-level autoregressive (AR) framework for real-time and interactive long video generation. Long video generation presents challenges in both efficiency and quality. Diffusion and Diffusion-Forcing models can produce high-quality videos but suffer from low efficiency due to bidirectional attention. Causal attention AR models support KV caching for faster inference, but often degrade in quality on long videos due to memory challenges during long-video training. In addition, beyond static prompt-based generation, interactive capabilities, such as streaming prompt inputs, are critical for dynamic content creation, enabling users to guide narratives in real time. This interactive requirement significantly increases complexity, especially in ensuring visual consistency and semantic coherence during prompt transitions. To address these challenges, LongLive adopts a causal, frame-level AR design that integrates a KV-recache mechanism, which refreshes cached states with new prompts for smooth, adherent switches; streaming long tuning, which enables long-video training and aligns training with inference (train-long-test-long); and short window attention paired with a frame-level attention sink (shortened to "frame sink"), which preserves long-range consistency while enabling faster generation. With these key designs, LongLive fine-tunes a 1.3B-parameter short-clip model to minute-long generation in just 32 GPU-days. At inference, LongLive sustains 20.7 FPS on a single NVIDIA H100 and achieves strong performance on VBench for both short and long videos. LongLive supports up to 240-second videos on a single H100 GPU, and further supports INT8-quantized inference with only marginal quality loss.
- Quantile Advantage Estimation for Entropy-Safe Reasoning
Reinforcement Learning with Verifiable Rewards (RLVR) strengthens LLM reasoning, but training often oscillates between entropy collapse and entropy explosion. We trace both hazards to the mean baseline used in value-free RL (e.g., GRPO and DAPO), which improperly penalizes negative-advantage samples under reward outliers. We propose Quantile Advantage Estimation (QAE), replacing the mean with a group-wise K-quantile baseline. QAE induces a response-level, two-regime gate: on hard queries (p <= 1 - K) it reinforces rare successes, while on easy queries (p > 1 - K) it targets remaining failures. Under first-order softmax updates, we prove two-sided entropy safety, giving lower and upper bounds on one-step entropy change that curb explosion and prevent collapse. Empirically, this minimal modification stabilizes entropy, sparsifies credit assignment (with tuned K, roughly 80% of responses receive zero advantage), and yields sustained pass@1 gains on Qwen3-8B/14B-Base across AIME 2024/2025 and AMC 2023. These results identify baseline design -- rather than token-level heuristics -- as the primary mechanism for scaling RLVR.
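The baseline swap at the heart of QAE can be sketched in a few lines. This is a minimal illustration of the abstract's description with binary rewards; the group size and use of `np.quantile` are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def quantile_advantage(rewards, k=0.8):
    """Group-wise advantage with a K-quantile baseline in place of the mean.

    For binary rewards this induces the two-regime gate described in the
    abstract: on hard queries (mostly failures) the baseline is 0, so only
    the rare successes get positive advantage; on easy queries (mostly
    successes) the baseline is 1, so only the remaining failures get
    negative advantage. Most responses receive exactly zero advantage.
    """
    rewards = np.asarray(rewards, dtype=float)
    baseline = np.quantile(rewards, k)  # K-quantile of the group's rewards
    return rewards - baseline
```

With k = 0.8 and a group of ten rollouts, a hard query with one success yields advantage 1 for that success and 0 elsewhere, while an easy query with one failure yields -1 for that failure and 0 elsewhere, matching the gate and the sparse credit assignment the abstract reports.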
- MinerU2.5: A Decoupled Vision-Language Model for Efficient High-Resolution Document Parsing
We introduce MinerU2.5, a 1.2B-parameter document parsing vision-language model that achieves state-of-the-art recognition accuracy while maintaining exceptional computational efficiency. Our approach employs a coarse-to-fine, two-stage parsing strategy that decouples global layout analysis from local content recognition. In the first stage, the model performs efficient layout analysis on downsampled images to identify structural elements, circumventing the computational overhead of processing high-resolution inputs. In the second stage, guided by the global layout, it performs targeted content recognition on native-resolution crops extracted from the original image, preserving fine-grained details in dense text, complex formulas, and tables. To support this strategy, we developed a comprehensive data engine that generates diverse, large-scale training corpora for both pretraining and fine-tuning. Ultimately, MinerU2.5 demonstrates strong document parsing ability, achieving state-of-the-art performance on multiple benchmarks, surpassing both general-purpose and domain-specific models across various recognition tasks, while maintaining significantly lower computational overhead.
- EPO: Entropy-regularized Policy Optimization for LLM Agents Reinforcement Learning
Training LLM agents in multi-turn environments with sparse rewards, where completing a single task requires 30+ turns of interaction within an episode, presents a fundamental challenge for reinforcement learning. We identify a critical failure mode unique to this setting: the exploration-exploitation cascade failure. This cascade begins with early-stage policy premature convergence, where sparse feedback causes agents to commit to flawed, low-entropy strategies. Subsequently, agents enter late-stage policy collapse, where conventional entropy regularization becomes counterproductive, promoting chaotic exploration that destabilizes training. We propose Entropy-regularized Policy Optimization (EPO), a general framework that breaks this failure cycle through three synergistic mechanisms: (1) adopting entropy regularization in multi-turn settings to enhance exploration, (2) an entropy smoothing regularizer that bounds policy entropy within historical averages to prevent abrupt fluctuations, and (3) adaptive phase-based weighting that balances exploration and exploitation across training. Our analysis justifies that EPO guarantees monotonically decreasing entropy variance while maintaining convergence. EPO achieves up to 152% performance improvement on ScienceWorld and up to 19.8% on ALFWorld. Our work demonstrates that multi-turn sparse-reward settings require fundamentally different entropy control than traditional RL, with broad implications for LLM agent training.
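Mechanism (2), bounding policy entropy near its historical average, might look roughly like the following sketch; the quadratic penalty form and the `band` hyperparameter are assumptions for illustration, not the paper's exact regularizer:

```python
def entropy_smoothing_penalty(entropy, history, band=0.1):
    """Penalize current policy entropy when it strays outside a +/-band
    window around its running historical average, discouraging the abrupt
    entropy fluctuations the EPO abstract describes (assumed form)."""
    avg = sum(history) / len(history)          # historical average entropy
    lo, hi = avg * (1 - band), avg * (1 + band)
    if entropy < lo:                           # collapsing: entropy too low
        return (lo - entropy) ** 2
    if entropy > hi:                           # exploding: entropy too high
        return (entropy - hi) ** 2
    return 0.0                                 # inside the band: no penalty
```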
- ReviewScore: Misinformed Peer Review Detection with Large Language Models
Peer review serves as a backbone of academic research, but in most AI conferences, the review quality is degrading as the number of submissions explodes. To reliably detect low-quality reviews, we define misinformed review points as either "weaknesses" in a review that contain incorrect premises, or "questions" in a review that are already answered by the paper. We verify that 15.2% of weaknesses and 26.4% of questions are misinformed and introduce ReviewScore, indicating whether a review point is misinformed. To evaluate the factuality of each premise of weaknesses, we propose an automated engine that reconstructs every explicit and implicit premise from a weakness. We build a human expert-annotated ReviewScore dataset to check the ability of LLMs to automate ReviewScore evaluation. Then, we measure human-model agreements on ReviewScore using eight current state-of-the-art LLMs and verify moderate agreements. We also show that evaluating premise-level factuality yields significantly higher agreements than evaluating weakness-level factuality. A thorough disagreement analysis further supports the potential of fully automated ReviewScore evaluation.
- Variational Reasoning for Language Models
We introduce a variational reasoning framework for language models that treats thinking traces as latent variables and optimizes them through variational inference. Starting from the evidence lower bound (ELBO), we extend it to a multi-trace objective for tighter bounds and propose a forward-KL formulation that stabilizes the training of the variational posterior. We further show that rejection sampling finetuning and binary-reward RL, including GRPO, can be interpreted as local forward-KL objectives, where an implicit weighting by model accuracy naturally arises from the derivation and reveals a previously unnoticed bias toward easier questions. We empirically validate our method on the Qwen 2.5 and Qwen 3 model families across a wide range of reasoning tasks. Overall, our work provides a principled probabilistic perspective that unifies variational inference with RL-style methods and yields stable objectives for improving the reasoning ability of language models. Our code is available at https://github.com/sail-sg/variational-reasoning.
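For reference, the single-trace ELBO the abstract starts from has the standard form, with the thinking trace z treated as the latent variable (the paper's multi-trace objective then tightens this bound):

```latex
\log p_\theta(y \mid x)
  \;\ge\;
  \mathbb{E}_{q_\phi(z \mid x, y)}
  \!\left[ \log \frac{p_\theta(z, y \mid x)}{q_\phi(z \mid x, y)} \right]
  \;=\; \mathrm{ELBO}(x, y)
```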
- StableToken: A Noise-Robust Semantic Speech Tokenizer for Resilient SpeechLLMs
Prevalent semantic speech tokenizers, designed to capture linguistic content, are surprisingly fragile. We find they are not robust to meaning-irrelevant acoustic perturbations; even at high Signal-to-Noise Ratios (SNRs) where speech is perfectly intelligible, their output token sequences can change drastically, increasing the learning burden for downstream LLMs. This instability stems from two flaws: a brittle single-path quantization architecture and a distant training signal indifferent to intermediate token stability. To address this, we introduce StableToken, a tokenizer that achieves stability through a consensus-driven mechanism. Its multi-branch architecture processes audio in parallel, and these representations are merged via a powerful bit-wise voting mechanism to form a single, stable token sequence. StableToken sets a new state-of-the-art in token stability, drastically reducing Unit Edit Distance (UED) under diverse noise conditions. This foundational stability translates directly to downstream benefits, significantly improving the robustness of SpeechLLMs on a variety of tasks.
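The bit-wise voting step can be illustrated as a per-bit majority over parallel quantizer branches; the branch count and binary code layout here are assumptions for illustration, not StableToken's actual architecture:

```python
import numpy as np

def bitwise_vote(branch_codes):
    """Merge token codes from parallel quantizer branches into one stable
    code by taking a majority vote on each bit position, so a perturbation
    that flips bits in a minority of branches leaves the output unchanged."""
    bits = np.asarray(branch_codes)  # shape: (num_branches, code_bits), 0/1
    # A bit is 1 iff more than half of the branches vote 1.
    return (bits.sum(axis=0) * 2 > bits.shape[0]).astype(int)
```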
- Beyond the Exploration-Exploitation Trade-off: A Hidden State Approach for LLM Reasoning in RLVR
A prevailing view in Reinforcement Learning for Verifiable Rewards (RLVR) interprets recent progress through the lens of an exploration-exploitation trade-off, a perspective largely shaped by token-level metrics. We re-examine this perspective, proposing that this perceived trade-off may not be a fundamental constraint but rather an artifact of the measurement level. To investigate this, we shift the analysis to the semantically rich hidden-state space, adopting Effective Rank (ER) to quantify exploration and proposing its novel first- and second-order derivatives, named Effective Rank Velocity (ERV) and Effective Rank Acceleration (ERA), to capture exploitation dynamics. Our analysis reveals that at the hidden-state level, exploration and exploitation could be decoupled (Sec. 4). This finding reveals an opportunity to enhance both capacities simultaneously. This insight motivates our method, Velocity-Exploiting Rank-Learning (VERL), the first to operationalize the principle of synergistic exploration-exploitation enhancement by directly shaping the RL advantage function. The key innovation is leveraging the theoretically stable ERA as a predictive meta-controller to create a synergistic, dual-channel incentive structure. Instead of forcing a trade-off, VERL prospectively amplifies rewards for exploration to preempt overconfidence and reinforces exploitative gains to consolidate reasoning. Experiments across diverse LLMs and reasoning benchmarks show consistent gains, including up to 21.4% absolute accuracy improvement on the challenging Gaokao 2024 dataset.
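Effective Rank here is presumably the standard definition, the exponential of the Shannon entropy of the normalized singular-value spectrum; a sketch under that assumption (ERV and ERA would then be its first and second finite differences across training steps):

```python
import numpy as np

def effective_rank(hidden_states):
    """Effective Rank (ER) of a (tokens x dim) hidden-state matrix:
    exp of the entropy of the normalized singular-value distribution.
    Ranges from 1 (rank-1 collapse) up to min(tokens, dim)."""
    s = np.linalg.svd(np.asarray(hidden_states, dtype=float), compute_uv=False)
    p = s / s.sum()           # normalize singular values to a distribution
    p = p[p > 1e-12]          # drop numerically-zero mass before log
    return float(np.exp(-(p * np.log(p)).sum()))
```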
- When Does Reasoning Matter? A Controlled Study of Reasoning's Contribution to Model Performance
Large Language Models (LLMs) with reasoning capabilities have achieved state-of-the-art performance on a wide range of tasks. Despite its empirical success, the tasks and model scales at which reasoning becomes effective, as well as its training and inference costs, remain underexplored. In this work, we rely on a synthetic data distillation framework to conduct a large-scale supervised study. We compare Instruction Fine-Tuning (IFT) and reasoning models of varying sizes, on a wide range of math-centric and general-purpose tasks, evaluating both multiple-choice and open-ended formats. Our analysis reveals that reasoning consistently improves model performance, often matching or surpassing significantly larger IFT systems. Notably, while IFT remains Pareto-optimal in training and inference costs, reasoning models become increasingly valuable as model size scales, overcoming IFT performance limits on reasoning-intensive and open-ended tasks.
- GSM8K-V: Can Vision Language Models Solve Grade School Math Word Problems in Visual Contexts
Vision language models (VLMs) achieve unified modeling of images and text, enabling them to accomplish complex real-world tasks through perception, planning, and reasoning. Among these tasks, reasoning is particularly representative, with mathematical reasoning serving as a prominent example. It highlights the high-level capability of VLMs to comprehend mathematical information in images and to perform sophisticated reasoning. Recently, numerous visual mathematical reasoning benchmarks have been proposed, but they are often restricted to geometry, lack coverage of math word problems, and rarely assess reasoning across multiple images. To address these gaps, we introduce GSM8K-V, a purely visual multi-image mathematical reasoning benchmark. GSM8K-V is built by systematically mapping each sample from the widely used text-based GSM8K into visual form. Through a carefully designed automated image-generation pipeline combined with meticulous human annotation, we curate 1,319 high-quality samples. We evaluate a wide range of open-source and closed-source models on GSM8K-V. Results show that although existing VLMs have nearly saturated performance on text-based GSM8K, there remains substantial room for improvement on GSM8K-V. For example, the best-performing model, Gemini-2.5-Pro, achieves 95.22% accuracy on GSM8K but only 46.93% on GSM8K-V. We conduct a comprehensive analysis of GSM8K-V, examining the limitations of current models as well as potential directions for improvement. GSM8K-V offers a new perspective on visual mathematical reasoning and establishes a benchmark to guide the development of more robust and generalizable VLMs.
- Towards Personalized Deep Research: Benchmarks and Evaluations
Deep Research Agents (DRAs) can autonomously conduct complex investigations and generate comprehensive reports, demonstrating strong real-world potential. However, existing evaluations mostly rely on close-ended benchmarks, while open-ended deep research benchmarks remain scarce and typically neglect personalized scenarios. To bridge this gap, we introduce Personalized Deep Research Bench, the first benchmark for evaluating personalization in DRAs. It pairs 50 diverse research tasks across 10 domains with 25 authentic user profiles that combine structured persona attributes with dynamic real-world contexts, yielding 250 realistic user-task queries. To assess system performance, we propose the PQR Evaluation Framework, which jointly measures (P) Personalization Alignment, (Q) Content Quality, and (R) Factual Reliability. Our experiments on a range of systems highlight current capabilities and limitations in handling personalized deep research. This work establishes a rigorous foundation for developing and evaluating the next generation of truly personalized AI research assistants.
- Random Policy Valuation is Enough for LLM Reasoning with Verifiable Rewards
RL with Verifiable Rewards (RLVR) has emerged as a promising paradigm for improving the reasoning abilities of large language models (LLMs). Current methods rely primarily on policy optimization frameworks like PPO and GRPO, which follow generalized policy iteration that alternates between evaluating the current policy's value and improving the policy based on evaluation. While effective, they often suffer from training instability and diversity collapse, requiring complex heuristic tricks and careful tuning. We observe that standard RLVR in math reasoning can be formalized as a specialized finite-horizon Markov Decision Process with deterministic state transitions, tree-structured dynamics, and binary terminal rewards. Though large in scale, the underlying structure is simpler than general-purpose control settings for which popular RL algorithms (e.g., PPO) were developed, suggesting that several sophisticated techniques in existing methods may be reduced or even omitted. Based on this insight, we prove a surprising result: the optimal action can be recovered from the Q-function of a fixed uniformly random policy, thereby bypassing the generalized policy iteration loop and its associated heuristics. We introduce Random Policy Valuation for Diverse Reasoning (ROVER), a minimalist yet highly effective RL method that translates this principle into a practical and scalable algorithm for LLM math reasoning: it samples actions from a softmax over these uniform-policy Q-values. ROVER preserves diversity throughout training, allowing sustained exploration of multiple valid pathways. Across multiple base models and standard math reasoning benchmarks, ROVER demonstrates superior performance in both quality (+8.2 on pass@1, +16.8 on pass@256) and diversity (+17.6%), despite its radical simplification compared to strong, complicated existing methods.
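The sampling rule the abstract describes, a softmax over uniform-policy Q-values, can be sketched as follows; the temperature knob is an assumption for illustration and may differ from the paper's exact parameterization:

```python
import numpy as np

def rover_sample(q_values, temperature=1.0, rng=None):
    """Sample an action index from softmax(Q / temperature), where Q are
    the Q-values of a fixed uniformly random policy (sketch of ROVER's
    sampling rule as stated in the abstract)."""
    if rng is None:
        rng = np.random.default_rng()
    z = np.asarray(q_values, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    return int(rng.choice(len(probs), p=probs)), probs
```

Because actions are sampled rather than argmaxed, higher-Q actions are preferred but alternatives keep nonzero probability, which is how the method preserves diversity across valid reasoning pathways.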
Solidot (26)
- Rogue planet found to have auroras
Astronomers using the James Webb Space Telescope observed SIMP-0136, a planet drifting freely through space, and unexpectedly found auroras appearing intermittently in its upper atmosphere, with the planet's atmospheric circulation driven by auroral heating. The free-floating planet SIMP-0136 lies about 20 light-years from Earth, with roughly 12.7 times the mass and 1.2 times the radius of Jupiter. Because it rotates once every ~2.4 hours, astronomers could quickly observe a complete cycle of atmospheric change. They found a "temperature inversion" in the vertical temperature profile: temperatures fall at lower altitudes and rise with height, the opposite of the profile on planets like Earth. The anomaly arises mainly because auroras continuously inject energy into and heat the upper atmosphere. The planet's clouds are made not of water or ice but of silicate grains, similar to the sand on Earth's beaches, and cover the planet almost uniformly, unlike Earth's cloud systems, which often have gaps. Its average temperature exceeds 1,500°C, far hotter than Jupiter or Saturn, whose average temperatures are around -100°C. The study shows that auroras are not exclusive to Earth or Jupiter: they can also play a key role in shaping the atmospheric structure and powering the dynamics of solitary drifting planets.
- Switzerland narrowly approves electronic ID in Sunday referendum
Switzerland approved an electronic identity card by a narrow majority in Sunday's referendum. This was the second national vote on the Swiss e-ID program. The first, in 2021, failed because voters were concerned about data privacy and because the system would have been run primarily by private companies. The government then revised the plan: the new e-ID will be run by the government, will be optional, and restricts data access; for example, an organization that needs to verify age will see only age information. Users can choose to bind their e-ID data to their phone, and changing phones will require applying for a new e-ID. In Sunday's vote, 50.4% of voters supported the e-ID and 49.6% opposed it, on a turnout of 49.55%.
- F-Droid issues statement opposing Google's developer identity verification requirement
Last month Google announced, in the name of security, that it will verify the identity of all Android app developers; starting next year, Google will block sideloading of Android apps from unverified developers. F-Droid, the free and open source Android app store, has issued a statement opposing the decision. F-Droid argues that if the policy is enforced, third-party app stores, including F-Droid itself, will face extinction. Google claims the move is about security, yet in recent years its official store, Google Play, has been found hosting large numbers of malicious apps. Requiring developer identity verification, F-Droid says, is not about security but about consolidating power and tightening control over a once-open ecosystem: Google is building a choke point that restricts competition and user freedom. F-Droid calls on concerned users to submit objections to their local legislators and to petition under the EU DMA to defend the free distribution of apps.
- Linux 6.17 released
Linus Torvalds announced the release of Linux 6.17 on the kernel mailing list, opening the Linux 6.18 merge window. Major new features in Linux 6.17 include: finer control over mitigations for the Spectre vulnerabilities on x86 CPUs; live patching support for 64-bit Arm; pidfd improvements; removal of special support for uniprocessor systems; initial support for proxy execution; the file_getattr() and file_setattr() system calls; experimental large folio support for the Btrfs filesystem; support for the DualPI2 congestion control protocol; and more. See the kernelnewbies page for details.
- Million-year-old "Dragon Man" fossil may rewrite the human family tree
Researchers from the Chinese Academy of Sciences and other institutions have re-analyzed an ancient human skull fossil dated to about one million years ago. The work not only reveals a new evolutionary lineage closely related to the mysterious Denisovans, Homo longi ("Dragon Man"), but also pushes the divergence of modern humans, Neanderthals, and this Asian lineage far earlier than the previous scholarly consensus. The study was published in Science. The researchers performed high-precision CT scanning and structured-light surface scanning of the "Yunxian 2" skull, discovered in Yunxian, Hubei in 1990, and digitally reconstructed it. The reconstruction shows a cranial capacity exceeding 1,100 ml and a blend of primitive and derived traits: the low, flat frontal bone and protruding snout resemble the older Homo erectus or Homo heidelbergensis, while the flat, low cheekbones, broader rear braincase, and larger cranial capacity resemble Homo longi and the Middle Pleistocene human fossils from Dali, Jinniushan, Hualongdong, and Xujiayao. The study found that the three-way split among Homo sapiens, Homo longi, and Neanderthals occurred very early, earlier than the fossil record had indicated but highly consistent with estimates from genomic data. Further results indicate that the Yunxian hominin was not Homo erectus but an early representative of the Homo longi lineage closely related to Denisovans, showing that human ancestors had already diverged into multiple independently evolving groups a million years ago.
- US considers requiring chip companies to match imports with domestic production 1:1
The US is considering a rule that would require chip companies to manufacture domestically as many chips as they import, or pay tariffs on the imported chips. The move aims to bring semiconductor manufacturing back to the US: companies making chips domestically would receive tariff exemptions, but companies that fail to maintain a 1:1 ratio of domestic production to imports over the long term would have to pay tariffs. US Commerce Secretary Howard Lutnick pitched the idea to semiconductor industry executives, telling them it is a matter of economic security. Under the proposal, companies that commit to manufacturing chips in the US would receive credits for their pledged output, allowing them to import chips tariff-free before their plants are completed.
- Afghanistan offline for more than a day
According to monitoring data from Netblocks, Afghanistan has been offline for more than 24 hours. Internet and mobile phone services are entirely down, leaving residents nationwide with communications almost completely cut off. The nationwide blackout began Monday night and continued through Tuesday. Najibullah, a 42-year-old shopkeeper in Kabul, said: "Without phones and internet we are all blind; every business depends on mobile phones. Deliveries are arranged by phone. It feels like a holiday: everyone is at home, and the market is completely frozen." This is the first time the Taliban government has cut communications nationwide, and officials have offered no explanation. Before the blackout, AFP received a warning from a government official that eight to nine thousand telecommunications pillars would be shut down and that the outage would last until further notice. Afghanistan's limited remaining communications rely on radio and a few satellite links.
- Linus Torvalds completely removes Bcachefs from Linux 6.18
After Linux 6.17 marked the Bcachefs filesystem as externally maintained and merged no pull requests from Bcachefs maintainer Kent Overstreet, Linus Torvalds has completely removed Bcachefs in Linux 6.18, deleting 117,000 lines of code in total. Torvalds commented that Bcachefs is now a DKMS module and the in-kernel code had grown outdated, so removing it avoids version confusion.
- World's highest bridge, the Huajiang Grand Canyon Bridge, opens to traffic
The Huajiang Grand Canyon Bridge in Guizhou has officially opened to traffic. Its deck stands 625 meters above the water, nearly 60 meters higher than the Beipanjiang First Bridge, making it the new world's highest bridge; its 1,420-meter main span is also the world's longest for a mountain bridge. The bridge is 2,890 meters long and cuts the crossing time between the two banks from more than two hours to about two minutes. From groundbreaking in 2022 to opening, this "mega-project" took just over three years to build. The steel truss girder was hoisted in 93 segments totaling 21,000 tons, requiring millimeter-level alignment more than 600 meters in the air. Using a purpose-built "smart cable hoisting system," the construction team completed all hoisting in just 73 days and paved the 38,000-square-meter deck with five layers in just over a month.
- CS professor warns that graduates are struggling to find jobs
Hany Farid, a UC Berkeley computer science professor known for his research on digital forensics and deepfakes, says computer science went from a time-tested career to an industry in upheaval in a very short span. Computer science students, he says, would typically land five internships over their first four years and graduate with multiple high-paying job offers. That no longer happens; now they are happy to receive a single offer. Farid believes AI is only one factor: something is changing in the computer science industry itself. His advice to students now is to master multiple skills, since no one knows what the future holds. AI will not put lawyers out of work, he says, but lawyers who use AI will put lawyers who don't out of work, and he believes the same holds for every profession.
- DRAM prices double on surging AI demand
Data from US research firm Omdia puts the forecast price of server DRAM for October-December at $4.30/GB, $2.40 higher than in October-December 2023. The forecast price for PC DRAM is $2.80/GB, up $1.20 from October-December 2023. Behind the trend is surging demand for AI servers, which mainly use HBM memory. The major DRAM makers Samsung Electronics, SK Hynix, and Micron have scaled back or halted production of previous-generation DDR4 and shifted to producing and selling HBM. AI server memory demand is driving the entire semiconductor market: the Semiconductor Industry Association (SIA) reported global semiconductor sales of $62.07 billion in July, up 20.6% year over year, topping $60 billion for the first time and marking the 21st consecutive month of year-over-year growth.
- Microplastics may weaken bones
According to a review published in Osteoporosis International, researchers analyzed 62 studies and found that microplastics can weaken bones by disrupting bone marrow stem cells and stimulating osteoclasts, the cells that break down bone tissue. Laboratory experiments found that microplastic particles reduce cell viability, induce premature cellular senescence, alter gene expression, and trigger inflammatory responses. Animal studies found that microplastic accumulation lowers white blood cell counts, disrupts bone microstructure, causes irregular cell architecture, and increases fracture risk. Rodrigo Bueno de Oliveira of Campinas State University in Brazil said these effects hindered bone growth in lab animals.