机器学习学术速递[7.14]

arXiv每日学术速递



cs.LG 方向,今日共计107篇


大模型相关(12篇)

【1】One Token to Fool LLM-as-a-Judge
标题:一个令牌即可愚弄LLM评判者(LLM-as-a-Judge)
链接:https://arxiv.org/abs/2507.08794

作者:o, Haolin Liu, Dian Yu, S.Y. Kung, Haitao Mi, Dong Yu
摘要:Generative reward models (also known as LLMs-as-judges), which use large language models (LLMs) to evaluate answer quality, are increasingly adopted in reinforcement learning with verifiable rewards (RLVR). They are often preferred over rigid rule-based metrics, especially for complex reasoning tasks involving free-form outputs. In this paradigm, an LLM is typically prompted to compare a candidate answer against a ground-truth reference and assign a binary reward indicating correctness. Despite the seeming simplicity of this comparison task, we find that generative reward models exhibit surprising vulnerabilities to superficial manipulations: non-word symbols (e.g., ":" or ".") or reasoning openers like "Thought process:" and "Let's solve this problem step by step." can often lead to false positive rewards. We demonstrate that this weakness is widespread across LLMs, datasets, and prompt formats, posing a serious threat for core algorithmic paradigms that rely on generative reward models, such as rejection sampling, preference optimization, and RLVR. To mitigate this issue, we introduce a simple yet effective data augmentation strategy and train a new generative reward model with substantially improved robustness. Our findings highlight the urgent need for more reliable LLM-based evaluation methods. We release our robust, general-domain reward model and its synthetic training data at https://huggingface.co/sarosavo/Master-RM and https://huggingface.co/datasets/sarosavo/Master-RM.
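
A quick way to reproduce this failure mode on your own judge is to score content-free "answers" made of just the reasoning openers named in the abstract. A minimal probe sketch, where `toy_judge` is a hypothetical stand-in for any LLM-as-a-judge call that returns a binary reward:

```python
# Probe sketch: how often does a judge grant a positive reward to a
# content-free "reasoning opener" alone? `toy_judge` is a placeholder for
# a real LLM-as-a-judge API (prompted with question, reference, candidate).
OPENERS = [":", ".", "Thought process:",
           "Let's solve this problem step by step."]

def toy_judge(question: str, reference: str, candidate: str) -> int:
    # Deliberately naive stand-in: rewards anything that looks like an attempt.
    return int(len(candidate.strip()) > 0)

def false_positive_rate(items, opener, judge=toy_judge):
    # A robust judge should return ~0 here: the candidate has no content.
    return sum(judge(q, ref, opener) for q, ref in items) / len(items)

items = [("What is 2+2?", "4"), ("Capital of France?", "Paris")]
for op in OPENERS:
    print(repr(op), false_positive_rate(items, op))
```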


【2】AgentsNet: Coordination and Collaborative Reasoning in Multi-Agent LLMs
标题:AgentsNet:多Agent LLM中的协调和协作推理
链接:https://arxiv.org/abs/2507.08616

作者:rötschla, Luis Müller, Jan Tönshoff, Mikhail Galkin, Bryan Perozzi
备注:Preprint
摘要:Large-language models (LLMs) have demonstrated powerful problem-solving capabilities, in particular when organized in multi-agent systems. However, the advent of such systems also raises several questions on the ability of a complex network of agents to effectively self-organize and collaborate. While measuring performance on standard reasoning benchmarks indicates how well multi-agent systems can solve reasoning tasks, it is unclear whether these systems are able to leverage their topology effectively. Here, we propose AgentsNet, a new benchmark for multi-agent reasoning. By drawing inspiration from classical problems in distributed systems and graph theory, AgentsNet measures the ability of multi-agent systems to collaboratively form strategies for problem-solving, self-organization, and effective communication given a network topology. We evaluate a variety of baseline methods on AgentsNet including homogeneous networks of agents which first have to agree on basic protocols for organization and communication. We find that some frontier LLMs are already demonstrating strong performance for small networks but begin to fall off once the size of the network scales. While existing multi-agent benchmarks cover at most 2-5 agents, AgentsNet is practically unlimited in size and can scale with new generations of LLMs. As such, we also probe frontier models in a setup with up to 100 agents.


【3】Efficient Deployment of Vision-Language Models on Mobile Devices: A Case Study on OnePlus 13R
标题:在移动设备上高效部署视觉语言模型:OnePlus 13R的案例研究
链接:https://arxiv.org/abs/2507.08505

作者:in Guerrero, Yueyang Pan, Sanidhya Kashyap
摘要:Vision-Language Models (VLMs) offer promising capabilities for mobile devices, but their deployment faces significant challenges due to computational limitations and energy inefficiency, especially for real-time applications. This study provides a comprehensive survey of deployment frameworks for VLMs on mobile devices, evaluating llama.cpp, MLC-Imp, and mllm in the context of running LLaVA-1.5 7B, MobileVLM-3B, and Imp-v1.5 3B as representative workloads on a OnePlus 13R. Each deployment framework was evaluated on the OnePlus 13R while running VLMs, with measurements covering CPU, GPU, and NPU utilization, temperature, inference time, power consumption, and user experience. Benchmarking revealed critical performance bottlenecks across frameworks: CPU resources were consistently over-utilized during token generation, while GPU and NPU accelerators were largely unused. When the GPU was used, primarily for image feature extraction, it was saturated, leading to degraded device responsiveness. The study contributes framework-level benchmarks, practical profiling tools, and an in-depth analysis of hardware utilization bottlenecks, highlighting the consistent overuse of CPUs and the ineffective or unstable use of GPUs and NPUs in current deployment frameworks.


【4】Pre-Training LLMs on a budget: A comparison of three optimizers
标题:预算内预训练LLM:三种优化器的比较
链接:https://arxiv.org/abs/2507.08472

作者:otthauer, Christian Kroos, Chris Hinze, Viktor Hangya, Luzian Hahn, Fabian Küch
摘要:Optimizers play a decisive role in reducing pre-training times for LLMs and achieving better-performing models. In this study, we compare three major variants: the de-facto standard AdamW, the simpler Lion, developed through an evolutionary search, and the second-order optimizer Sophia. For better generalization, we train with two different base architectures and use a single- and a multiple-epoch approach while keeping the number of tokens constant. Using the Maximal Update Parametrization and smaller proxy models, we tune relevant hyperparameters separately for each combination of base architecture and optimizer. We found that while the results from all three optimizers were in approximately the same range, Sophia exhibited the lowest training and validation loss, Lion was fastest in terms of training GPU hours, but AdamW led to the best downstream evaluation results.
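
Of the three optimizers, Lion has the simplest published update rule (sign of an interpolated momentum plus decoupled weight decay), which makes it easy to sketch in PyTorch. The defaults below are illustrative, not the paper's tuned values:

```python
import torch

class Lion(torch.optim.Optimizer):
    """Minimal Lion optimizer (Chen et al., 2023): takes the sign of an
    interpolated momentum and applies decoupled weight decay. Illustrative
    sketch, not the paper's training code."""
    def __init__(self, params, lr=1e-4, betas=(0.9, 0.99), weight_decay=0.0):
        super().__init__(params, dict(lr=lr, betas=betas, weight_decay=weight_decay))

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            lr, (b1, b2), wd = group["lr"], group["betas"], group["weight_decay"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                m = self.state[p].setdefault("m", torch.zeros_like(p))
                update = (b1 * m + (1 - b1) * p.grad).sign()
                p.add_(update + wd * p, alpha=-lr)      # decoupled weight decay
                m.mul_(b2).add_(p.grad, alpha=1 - b2)   # momentum update

# Usage: opt = Lion(model.parameters(), lr=3e-5, weight_decay=0.1)
```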


【5】Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling
标题:基于粒子Gibbs采样的扩散语言模型推理时扩展
链接:https://arxiv.org/abs/2507.08390

作者:ng, Jiaqi Han, Minkai Xu, Kai Xu, Akash Srivastava, Stefano Ermon
摘要:Discrete diffusion models have emerged as a powerful paradigm for language modeling, rivaling auto-regressive models by training-time scaling. However, inference-time scaling in discrete diffusion models remains relatively under-explored. In this work, we study sampling-based approaches for achieving high-quality text generation from discrete diffusion models in reward-guided settings. We introduce a novel inference-time scaling approach based on particle Gibbs sampling for discrete diffusion models. The particle Gibbs sampling algorithm iteratively refines full diffusion trajectories using conditional Sequential Monte Carlo as its transition mechanism. This process ensures that the updated samples progressively improve and move closer to the reward-weighted target distribution. Unlike existing inference-time scaling methods, which are often limited to single diffusion trajectories, our approach leverages iterative refinement across multiple trajectories. Within this framework, we further analyze the trade-offs between four key axes for inference-time scaling under fixed compute budgets: particle Gibbs iterations, particle count, denoising steps, and reward estimation cost. Empirically, our method consistently outperforms prior inference-time strategies on reward-guided text generation tasks, achieving significant improvement in accuracy under varying compute budgets.
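
The conditional-SMC transition at the heart of particle Gibbs is built from reward-weighted resampling over a particle set. A generic sketch of that single building block follows; the full algorithm additionally keeps a reference trajectory fixed across each sweep, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

def resample_by_reward(trajectories, rewards, temp=1.0):
    """One reward-weighted multinomial resampling step over a particle set.
    This is only the generic SMC building block; the paper's conditional SMC
    inside particle Gibbs also retains a reference trajectory unchanged."""
    w = np.exp(np.asarray(rewards, dtype=float) / temp)
    w /= w.sum()                                    # normalized importance weights
    idx = rng.choice(len(trajectories), size=len(trajectories), p=w)
    return [trajectories[i] for i in idx]

# Toy usage: high-reward "trajectories" dominate after resampling.
particles = ["draft A", "draft B", "draft C", "draft D"]
print(resample_by_reward(particles, rewards=[0.1, 2.0, 0.3, 1.5], temp=0.5))
```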


【6】Leveraging Machine Learning and Enhanced Parallelism Detection for BPMN Model Generation from Text
标题:利用机器学习和增强的并行性检测从文本生成BPMN模型
链接:https://arxiv.org/abs/2507.08362

作者:m Lê, Charlotte Schneider-Depré, Alexandre Goossens, Alexander Stevens, Aurélie Leribaux, Johannes De Smedt
摘要:Efficient planning, resource management, and consistent operations often rely on converting textual process documents into formal Business Process Model and Notation (BPMN) models. However, this conversion process remains time-intensive and costly. Existing approaches, whether rule-based or machine-learning-based, still struggle with writing styles and often fail to identify parallel structures in process descriptions.   This paper introduces an automated pipeline for extracting BPMN models from text, leveraging the use of machine learning and large language models. A key contribution of this work is the introduction of a newly annotated dataset, which significantly enhances the training process. Specifically, we augment the PET dataset with 15 newly annotated documents containing 32 parallel gateways for model training, a critical feature often overlooked in existing datasets. This addition enables models to better capture parallel structures, a common but complex aspect of process descriptions. The proposed approach demonstrates adequate performance in terms of reconstruction accuracy, offering a promising foundation for organizations to accelerate BPMN model creation.


【7】A Practical Two-Stage Recipe for Mathematical LLMs: Maximizing Accuracy with SFT and Efficiency with Reinforcement Learning
标题:数学LLM的实用两阶段训练方案:用SFT最大化准确率,用强化学习最大化效率
链接:https://arxiv.org/abs/2507.08267

作者:oshihara, Taiki Yamaguchi, Yuichi Inoue
备注:Presented at ICML 2025 Workshop on The second AI for MATH
摘要:Enhancing the mathematical reasoning of Large Language Models (LLMs) is a pivotal challenge in advancing AI capabilities. While Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) are the dominant training paradigms, a systematic methodology for combining them to maximize both accuracy and efficiency remains largely unexplored. This paper introduces a practical and effective training recipe that strategically integrates extended SFT with RL from online inference (GRPO). We posit that these methods play complementary, not competing, roles: a prolonged SFT phase first pushes the model's accuracy to its limits, after which a GRPO phase dramatically improves token efficiency while preserving this peak performance. Our experiments reveal that extending SFT for as many as 10 epochs is crucial for performance breakthroughs, and that the primary role of GRPO in this framework is to optimize solution length. The efficacy of our recipe is rigorously validated through top-tier performance on challenging benchmarks, including a high rank among over 2,200 teams in the strictly leak-free AI Mathematical Olympiad (AIMO). This work provides the community with a battle-tested blueprint for developing state-of-the-art mathematical reasoners that are both exceptionally accurate and practically efficient. To ensure full reproducibility and empower future research, we will open-source our entire framework, including all code, model checkpoints, and training configurations at https://github.com/analokmaus/kaggle-aimo2-fast-math-r1.
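
GRPO's group-relative advantage is straightforward to compute: sample several completions per prompt and normalize their rewards within the group. A sketch follows; the token-length penalty is our own illustration of how a reward can be made length-aware, not the paper's exact formulation:

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages as in GRPO: normalize each sampled
    completion's reward by its group's mean and std (one group per prompt)."""
    r = np.asarray(rewards, dtype=float)          # shape: (group_size,)
    return (r - r.mean()) / (r.std() + eps)

def reward(correct: bool, n_tokens: int, lam: float = 1e-4) -> float:
    # Hypothetical length-aware reward: correctness minus a small token
    # penalty, nudging the policy toward shorter solutions.
    return float(correct) - lam * n_tokens

print(grpo_advantages([1.0, 0.0, 1.0, 1.0]))      # one group of 4 samples
```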


【8】Quantum-Accelerated Neural Imputation with Large Language Models (LLMs)
标题:使用大型语言模型(LLM)的量子加速神经插补
链接:https://arxiv.org/abs/2507.08255

作者:amali
摘要:Missing data presents a critical challenge in real-world datasets, significantly degrading the performance of machine learning models. While Large Language Models (LLMs) have recently demonstrated remarkable capabilities in tabular data imputation, exemplified by frameworks like UnIMP, their reliance on classical embedding methods often limits their ability to capture complex, non-linear correlations, particularly in mixed-type data scenarios encompassing numerical, categorical, and textual features. This paper introduces Quantum-UnIMP, a novel framework that integrates shallow quantum circuits into an LLM-based imputation architecture. Our core innovation lies in replacing conventional classical input embeddings with quantum feature maps generated by an Instantaneous Quantum Polynomial (IQP) circuit. This approach enables the model to leverage quantum phenomena such as superposition and entanglement, thereby learning richer, more expressive representations of data and enhancing the recovery of intricate missingness patterns. Our experiments on benchmark mixed-type datasets demonstrate that Quantum-UnIMP reduces imputation error by up to 15.2% for numerical features (RMSE) and improves classification accuracy by 8.7% for categorical features (F1-Score) compared to state-of-the-art classical and LLM-based methods. These compelling results underscore the profound potential of quantum-enhanced representations for complex data imputation tasks, even with near-term quantum hardware.


【9】InsightBuild: LLM-Powered Causal Reasoning in Smart Building Systems
标题:InsightBuild:智能建筑系统中LLM支持的因果推理
链接:https://arxiv.org/abs/2507.08235

作者:asad Guha Neogi, Ahmad Mohammadshirazi, Rajiv Ramnath
摘要:Smart buildings generate vast streams of sensor and control data, but facility managers often lack clear explanations for anomalous energy usage. We propose InsightBuild, a two-stage framework that integrates causality analysis with a fine-tuned large language model (LLM) to provide human-readable, causal explanations of energy consumption patterns. First, a lightweight causal inference module applies Granger causality tests and structural causal discovery on building telemetry (e.g., temperature, HVAC settings, occupancy) drawn from Google Smart Buildings and Berkeley Office datasets. Next, an LLM, fine-tuned on aligned pairs of sensor-level causes and textual explanations, receives as input the detected causal relations and generates concise, actionable explanations. We evaluate InsightBuild on two real-world datasets (Google: 2017-2022; Berkeley: 2018-2020), using expert-annotated ground-truth causes for a held-out set of anomalies. Our results demonstrate that combining explicit causal discovery with LLM-based natural language generation yields clear, precise explanations that assist facility managers in diagnosing and mitigating energy inefficiencies.
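
The Granger-causality step of the pipeline can be reproduced with statsmodels. A small sketch on synthetic telemetry, where an "hvac" signal drives "energy" with a two-step lag (the variable names are ours, not the datasets'):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Synthetic telemetry: HVAC setting drives energy use with a 2-step lag.
rng = np.random.default_rng(1)
hvac = rng.normal(size=500)
energy = 0.8 * np.roll(hvac, 2) + rng.normal(scale=0.3, size=500)
df = pd.DataFrame({"energy": energy, "hvac": hvac})

# Null hypothesis: the second column does NOT Granger-cause the first.
res = grangercausalitytests(df[["energy", "hvac"]].values, maxlag=4)
for lag, (tests, _) in res.items():
    f_stat, p_value = tests["ssr_ftest"][:2]
    print(f"lag={lag}: F={f_stat:.2f}, p={p_value:.4f}")   # small p at lag >= 2
```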


【10】Integrating External Tools with Large Language Models to Improve Accuracy
标题:将外部工具与大型语言模型集成以提高准确性
链接:https://arxiv.org/abs/2507.08034

作者:iketan, Hadj Batatia
备注:9 pages, 3 figures, 2 tables. Extended version of paper published in Proceedings of International Conference on Information Technology and Applications, Springer Nature Singapore, 2025, pp. 409-421. This version includes additional experimental results comparing against GPT-4o, LLaMA-Large, Mistral-Large, and Phi-Large, expanded evaluation methodology, and enhanced analysis.
摘要:This paper deals with improving querying large language models (LLMs). It is well-known that without relevant contextual information, LLMs can provide poor quality responses or tend to hallucinate. Several initiatives have proposed integrating LLMs with external tools to provide them with up-to-date data to improve accuracy. In this paper, we propose a framework to integrate external tools to enhance the capabilities of LLMs in answering queries in educational settings. Precisely, we develop a framework that allows accessing external APIs to request additional relevant information. Integrated tools can also provide computational capabilities such as calculators or calendars. The proposed framework has been evaluated using datasets from the Multi-Modal Language Understanding (MMLU) collection. The data consists of questions on mathematical and scientific reasoning. Results compared to state-of-the-art language models show that the proposed approach significantly improves performance. Our Athena framework achieves 83% accuracy in mathematical reasoning and 88% in scientific reasoning, substantially outperforming all tested models including GPT-4o, LLaMA-Large, Mistral-Large, Phi-Large, and GPT-3.5, with the best baseline model (LLaMA-Large) achieving only 67% and 79% respectively. These promising results open the way to creating complex computing ecosystems around LLMs to make their use more natural to support various tasks and activities.
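
The framework routes model output to external tools such as calculators. A minimal sketch of that dispatch loop, where the `TOOL:` line protocol and `toy_llm` are our own illustrative conventions rather than the Athena implementation:

```python
import ast
import operator

# Minimal calculator tool: safely evaluates arithmetic expressions via the AST.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul,
        ast.Div: operator.truediv, ast.Pow: operator.pow, ast.USub: operator.neg}

def calc(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

TOOLS = {"calculator": calc}

def answer_with_tools(llm, question):
    """If the model emits 'TOOL: <name> <args>', run the tool and re-query
    with the result appended. `llm` is any callable(prompt) -> str."""
    reply = llm(question)
    if reply.startswith("TOOL:"):
        _, name, arg = reply.split(maxsplit=2)
        result = TOOLS[name](arg)
        reply = llm(f"{question}\nTool {name} returned: {result}\nFinal answer:")
    return reply

# Toy stand-in "LLM" that always asks for the calculator once.
def toy_llm(prompt):
    return "TOOL: calculator 3*(7+2)" if "returned" not in prompt else "27"

print(answer_with_tools(toy_llm, "What is 3*(7+2)?"))   # -> 27
```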


【11】Review, Remask, Refine (R3): Process-Guided Block Diffusion for Text Generation
标题:审查、重掩蔽、细化(R3):用于文本生成的过程引导块扩散
链接:https://arxiv.org/abs/2507.08018

作者:unier, Parsa Idehpour
备注:Accepted at Methods and Opportunities at Small Scale (MOSS), ICML 2025
摘要:A key challenge for iterative text generation is enabling models to efficiently identify and correct their own errors. We propose Review, Remask, Refine (R3), a relatively simple yet elegant framework that requires no additional model training and can be applied to any pre-trained masked text diffusion model (e.g., LLaDA or BD3-LM). In R3, a Process Reward Model (PRM) is utilized for the Review of intermediate generated blocks. The framework then translates these PRM scores into a Remask strategy: the lower a block's PRM score, indicating potential mistakes, the greater the proportion of tokens within that block are remasked. Finally, the model is compelled to Refine these targeted segments, focusing its efforts more intensively on specific sub-optimal parts of past generations, leading to improved final output.
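
The Remask step maps PRM scores to remasking ratios. A sketch with a linear score-to-ratio mapping, which is our simplification — the paper only requires that lower-scored blocks get proportionally more tokens remasked:

```python
import numpy as np

rng = np.random.default_rng(0)

def remask(tokens, block_prm_scores, block_size, max_ratio=0.8):
    """Review-Remask sketch: the lower a block's PRM score, the larger the
    fraction of its tokens that get remasked for refinement."""
    tokens = np.array(tokens, dtype=object)
    for b, score in enumerate(block_prm_scores):    # score assumed in [0, 1]
        ratio = max_ratio * (1.0 - score)           # linear mapping (ours)
        lo, hi = b * block_size, min((b + 1) * block_size, len(tokens))
        n_mask = int(round(ratio * (hi - lo)))
        idx = rng.choice(np.arange(lo, hi), size=n_mask, replace=False)
        tokens[idx] = "[MASK]"
    return tokens.tolist()

# Block 0 scored well (little remasking); block 1 scored poorly (heavy remasking).
print(remask(list("abcdefgh"), block_prm_scores=[0.9, 0.2], block_size=4))
```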


【12】MedicalBERT: enhancing biomedical natural language processing using pretrained BERT-based model
标题:MedicalBERT:使用基于BERT的预训练模型增强生物医学自然语言处理
链接:https://arxiv.org/abs/2507.08013

作者:Reddy, N. Ragavenderan, Vasanth K., Ganesh N. Naik, Vishalakshi Prabhu, Nagaraja G. S
摘要:Recent advances in natural language processing (NLP) have been driven by pretrained language models like BERT, RoBERTa, T5, and GPT. These models excel at understanding complex texts, but biomedical literature, with its domain-specific terminology, poses challenges that models like Word2Vec and bidirectional long short-term memory (Bi-LSTM) can't fully address. GPT and T5, despite capturing context, fall short in tasks needing bidirectional understanding, unlike BERT. Addressing this, we proposed MedicalBERT, a pretrained BERT model trained on a large biomedical dataset and equipped with domain-specific vocabulary that enhances the comprehension of biomedical terminology. The MedicalBERT model is further optimized and fine-tuned to address diverse tasks, including named entity recognition, relation extraction, question answering, sentence similarity, and document classification. Performance metrics such as the F1-score, accuracy, and Pearson correlation are employed to showcase the efficiency of our model in comparison to other BERT-based models such as BioBERT, SciBERT, and ClinicalBERT. MedicalBERT outperforms these models on most of the benchmarks, and surpasses the general-purpose BERT model by 5.67% on average across all the tasks evaluated. This work also underscores the potential of leveraging pretrained BERT models for medical NLP tasks, demonstrating the effectiveness of transfer learning techniques in capturing domain-specific information.


Graph相关(图学习|图神经网络|图优化等)(3篇)

【1】KGRAG-Ex: Explainable Retrieval-Augmented Generation with Knowledge Graph-based Perturbations
标题:KGRAG-Ex:基于知识图的扰动的可解释检索增强生成
链接:https://arxiv.org/abs/2507.08443

作者:Balanos, Evangelos Chasanis, Konstantinos Skianis, Evaggelia Pitoura
摘要:Retrieval-Augmented Generation (RAG) enhances language models by grounding responses in external information, yet explainability remains a critical challenge, particularly when retrieval relies on unstructured text. Knowledge graphs (KGs) offer a solution by introducing structured, semantically rich representations of entities and their relationships, enabling transparent retrieval paths and interpretable reasoning. In this work, we present KGRAG-Ex, a RAG system that improves both factual grounding and explainability by leveraging a domain-specific KG constructed via prompt-based information extraction. Given a user query, KGRAG-Ex identifies relevant entities and semantic paths in the graph, which are then transformed into pseudo-paragraphs: natural language representations of graph substructures that guide corpus retrieval. To improve interpretability and support reasoning transparency, we incorporate perturbation-based explanation methods that assess the influence of specific KG-derived components on the generated answers. We conduct a series of experiments to analyze the sensitivity of the system to different perturbation methods, the relationship between graph component importance and their structural positions, the influence of semantic node types, and how graph metrics correspond to the influence of components within the explanations process.
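
The pseudo-paragraph idea — turning a KG path into retrieval-friendly prose — can be illustrated in a few lines; the template below is hypothetical, not KGRAG-Ex's actual verbalizer:

```python
def path_to_pseudo_paragraph(path):
    """Render a KG path [(head, relation, tail), ...] as natural language to
    guide corpus retrieval, mirroring the pseudo-paragraph idea."""
    sentences = [f"{h} {r.replace('_', ' ')} {t}." for h, r, t in path]
    return " ".join(sentences)

path = [("aspirin", "treats", "headache"),
        ("headache", "is_symptom_of", "migraine")]
print(path_to_pseudo_paragraph(path))
# -> "aspirin treats headache. headache is symptom of migraine."
```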


【2】EvA: Evolutionary Attacks on Graphs
标题:EvA:对图的进化攻击
链接:https://arxiv.org/abs/2507.08212

作者:Sadegh Akhondzadeh, Soroush H. Zargarbashi, Jimin Cao, Aleksandar Bojchevski
备注:23 pages, 12 figures
摘要:Even a slight perturbation in the graph structure can cause a significant drop in the accuracy of graph neural networks (GNNs). Most existing attacks leverage gradient information to perturb edges. This relaxes the attack's optimization problem from a discrete to a continuous space, resulting in solutions far from optimal. It also restricts the adaptability of the attack to non-differentiable objectives. Instead, we introduce a few simple yet effective enhancements of an evolutionary-based algorithm to solve the discrete optimization problem directly. Our Evolutionary Attack (EvA) works with any black-box model and objective, eliminating the need for a differentiable proxy loss. This allows us to design two novel attacks that reduce the effectiveness of robustness certificates and break conformal sets. The memory complexity of our attack is linear in the attack budget. Among our experiments, EvA shows $\sim$11\% additional drop in accuracy on average compared to the best previous attack, revealing significant untapped potential in designing attacks.
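
A bare-bones evolutionary edge-flip attack shows why memory stays linear in the budget: each individual is just an index array of candidate flips. `fitness` stands in for any black-box objective, such as the victim GNN's accuracy drop:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_attack(fitness, n_edges, budget, pop=32, gens=50, mut_p=0.2):
    """Black-box evolutionary attack sketch: select the fittest half, mutate
    to refill the population, repeat. Duplicate flips within an individual
    are tolerated here for simplicity."""
    population = [rng.choice(n_edges, size=budget, replace=False)
                  for _ in range(pop)]
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in population])
        parents = [population[i] for i in np.argsort(scores)[-pop // 2:]]
        children = []
        for p in parents:
            child = p.copy()
            mask = rng.random(budget) < mut_p            # mutate some flips
            child[mask] = rng.choice(n_edges, size=mask.sum())
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Toy fitness: prefer flipping low-index edges (stand-in for accuracy drop).
best = evolve_attack(lambda ind: -ind.sum(), n_edges=1000, budget=10)
print(sorted(best))
```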


【3】EP-GAT: Energy-based Parallel Graph Attention Neural Network for Stock Trend Classification
标题:EP-GAT:用于股票趋势分类的基于能量的并行图注意力神经网络
链接:https://arxiv.org/abs/2507.08184

作者:Jiang, Pengju Zhang, Peter Martin
备注:Accepted by IJCNN 2025, oral presentation
摘要:Graph neural networks have shown remarkable performance in forecasting stock movements, which arises from learning complex inter-dependencies between stocks and intra-dynamics of stocks. Existing approaches based on graph neural networks typically rely on static or manually defined factors to model changing inter-dependencies between stocks. Furthermore, these works often struggle to preserve hierarchical features within stocks. To bridge these gaps, this work presents the Energy-based Parallel Graph Attention Neural Network, a novel approach for predicting future movements for multiple stocks. First, it generates a dynamic stock graph with the energy difference between stocks and Boltzmann distribution, capturing evolving inter-dependencies between stocks. Then, a parallel graph attention mechanism is proposed to preserve the hierarchical intra-stock dynamics. Extensive experiments on five real-world datasets are conducted to validate the proposed approach, spanning from the US stock markets (NASDAQ, NYSE, SP) and UK stock markets (FTSE, LSE). The experimental results demonstrate that EP-GAT consistently outperforms competitive five baselines on test periods across various metrics. The ablation studies and hyperparameter sensitivity analysis further validate the effectiveness of each module in the proposed method.
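
The dynamic graph construction can be sketched directly from the description: pairwise energy differences passed through a Boltzmann factor, so stocks with similar energies connect strongly. The exact energy definition used by EP-GAT is not assumed here:

```python
import numpy as np

def boltzmann_adjacency(energy, temperature=1.0):
    """Dynamic stock-graph sketch: edge weights from pairwise energy
    differences through a Boltzmann factor."""
    e = np.asarray(energy, dtype=float)
    diff = np.abs(e[:, None] - e[None, :])          # pairwise |E_i - E_j|
    adj = np.exp(-diff / temperature)               # Boltzmann weighting
    np.fill_diagonal(adj, 0.0)                      # no self-loops
    return adj

# Example: energies derived from, e.g., recent per-stock volatility.
print(boltzmann_adjacency([0.1, 0.15, 0.9], temperature=0.2).round(3))
```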


Transformer(2篇)

【1】Prediction of Lane Change Intentions of Human Drivers using an LSTM, a CNN and a Transformer
标题:使用LSTM、CNN和Transformer预测人类驾驶员的车道变换意图
链接:https://arxiv.org/abs/2507.08365

作者: De Cristofaro, Felix Hofbaur, Aixi Yang, Arno Eichberger
备注:14 pages, 18 figures
摘要:Lane changes of preceding vehicles have a great impact on the motion planning of automated vehicles, especially in complex traffic situations. Predicting them would benefit the public in terms of safety and efficiency. While many research efforts have been made in this direction, few concentrated on predicting maneuvers within a set time interval compared to predicting at a set prediction time. In addition, there exists a lack of comparisons between different architectures to try to determine the best performing one and to assess how to correctly choose the input for such models. In this paper the structures of an LSTM, a CNN and a Transformer network are described and implemented to predict the intention of human drivers to perform a lane change. We show how the data was prepared starting from a publicly available dataset (highD), which features were used, how the networks were designed, and finally we compare the results of the three networks with different configurations of input data. We found that Transformer networks performed better than the other networks and were less affected by overfitting. The accuracy of the method spanned from $82.79\%$ to $96.73\%$ for different input configurations and showed overall good performance considering also precision and recall.


【2】SPINT: Spatial Permutation-Invariant Neural Transformer for Consistent Intracortical Motor Decoding
标题:SPINT:用于一致皮质内运动解码的空间置换不变神经Transformer
链接:https://arxiv.org/abs/2507.08402

作者: Hao Fang, Jingyuan Li, Tung Nguyen, Lu Mi, Amy Orsborn, Uygar Sümbül, Eli Shlizerman
摘要:Intracortical Brain-Computer Interfaces (iBCI) aim to decode behavior from neural population activity, enabling individuals with motor impairments to regain motor functions and communication abilities. A key challenge in long-term iBCI is the nonstationarity of neural recordings, where the composition and tuning profiles of the recorded populations are unstable across recording sessions. Existing methods attempt to address this issue by explicit alignment techniques; however, they rely on fixed neural identities and require test-time labels or parameter updates, limiting their generalization across sessions and imposing additional computational burden during deployment. In this work, we introduce SPINT - a Spatial Permutation-Invariant Neural Transformer framework for behavioral decoding that operates directly on unordered sets of neural units. Central to our approach is a novel context-dependent positional embedding scheme that dynamically infers unit-specific identities, enabling flexible generalization across recording sessions. SPINT supports inference on variable-size populations and allows few-shot, gradient-free adaptation using a small amount of unlabeled data from the test session. To further promote model robustness to population variability, we introduce dynamic channel dropout, a regularization method for iBCI that simulates shifts in population composition during training. We evaluate SPINT on three multi-session datasets from the FALCON Benchmark, covering continuous motor decoding tasks in human and non-human primates. SPINT demonstrates robust cross-session generalization, outperforming existing zero-shot and few-shot unsupervised baselines while eliminating the need for test-time alignment and fine-tuning. Our work contributes an initial step toward a robust and scalable neural decoding framework for long-term iBCI applications.


GAN|对抗|攻击|生成相关(6篇)

【1】SPLASH! Sample-efficient Preference-based inverse reinforcement learning for Long-horizon Adversarial tasks from Suboptimal Hierarchical demonstrations
标题:SPLASH!基于偏好的样本高效逆强化学习,从次优分层演示中学习长时程对抗任务
链接:https://arxiv.org/abs/2507.08707

作者:wley, Zachary Serlin, Tyler Paine, Makai Mann, Michael Benjamin, Calin Belta
摘要:Inverse Reinforcement Learning (IRL) presents a powerful paradigm for learning complex robotic tasks from human demonstrations. However, most approaches make the assumption that expert demonstrations are available, which is often not the case. Those that allow for suboptimality in the demonstrations are not designed for long-horizon goals or adversarial tasks. Many desirable robot capabilities fall into one or both of these categories, thus highlighting a critical shortcoming in the ability of IRL to produce field-ready robotic agents. We introduce Sample-efficient Preference-based inverse reinforcement learning for Long-horizon Adversarial tasks from Suboptimal Hierarchical demonstrations (SPLASH), which advances the state-of-the-art in learning from suboptimal demonstrations to long-horizon and adversarial settings. We empirically validate SPLASH on a maritime capture-the-flag task in simulation, and demonstrate real-world applicability with sim-to-real translation experiments on autonomous unmanned surface vehicles. We show that our proposed methods allow SPLASH to significantly outperform the state-of-the-art in reward learning from suboptimal demonstrations.


【2】Lightweight Safety Guardrails via Synthetic Data and RL-guided Adversarial Training
标题:通过合成数据和RL引导的对抗训练的轻量级安全护栏
链接:https://arxiv.org/abs/2507.08284

作者:lin, Gor Matevosyan, Xueying Ma, Vladimir Eremin, Suhaa Dada, Muqun Li, Riyaaz Shaik, Haluk Noyan Tokgozoglu
摘要:We introduce a lightweight yet highly effective safety guardrail framework for language models, demonstrating that small-scale language models can achieve, and even surpass, the performance of larger counterparts in content moderation tasks. This is accomplished through high-fidelity synthetic data generation and adversarial training. The synthetic data generation process begins with human-curated seed data, which undergoes query augmentation and paraphrasing to create diverse and contextually rich examples. This augmented data is then subjected to multiple rounds of curation, ensuring high fidelity and relevance. Inspired by recent advances in the Generative Adversarial Network (GAN) architecture, our adversarial training employs reinforcement learning to guide a generator that produces challenging synthetic examples. These examples are used to fine-tune the safety classifier, enhancing its ability to detect and mitigate harmful content. Additionally, we incorporate strategies from recent research on efficient LLM training, leveraging the capabilities of smaller models to improve the performance of larger generative models. With iterative adversarial training and the generation of diverse, high-quality synthetic data, our framework enables small language models (SLMs) to serve as robust safety guardrails. This approach not only reduces computational overhead but also enhances resilience against adversarial attacks, offering a scalable and efficient solution for content moderation in AI systems.


【3】Data-Driven Dimensional Synthesis of Diverse Planar Four-bar Function Generation Mechanisms via Direct Parameterization
标题:基于直接参数化的数据驱动多样化平面四杆函数生成机构尺寸综合
链接:https://arxiv.org/abs/2507.08269

作者:g Kim, Jaeheun Jung, Jeong Un Ha, Donghun Lee, Jae Kyung Shim
摘要:Dimensional synthesis of planar four-bar mechanisms is a challenging inverse problem in kinematics, requiring the determination of mechanism dimensions from desired motion specifications. We propose a data-driven framework that bypasses traditional equation-solving and optimization by leveraging supervised learning. Our method combines a synthetic dataset, an LSTM-based neural network for handling sequential precision points, and a Mixture of Experts (MoE) architecture tailored to different linkage types. Each expert model is trained on type-specific data and guided by a type-specifying layer, enabling both single-type and multi-type synthesis. A novel simulation metric evaluates prediction quality by comparing desired and generated motions. Experiments show our approach produces accurate, defect-free linkages across various configurations. This enables intuitive and efficient mechanism design, even for non-expert users, and opens new possibilities for scalable and flexible synthesis in kinematic design.


【4】Data Generation without Function Estimation
标题:无需函数估计的数据生成
链接:https://arxiv.org/abs/2507.08239

作者:shmand, Ashkan Soleymani
摘要:Estimating the score function (or other population-density-dependent functions) is a fundamental component of most generative models. However, such function estimation is computationally and statistically challenging. Can we avoid function estimation for data generation? We propose an estimation-free generative method: A set of points whose locations are deterministically updated with (inverse) gradient descent can transport a uniform distribution to arbitrary data distribution, in the mean field regime, without function estimation, training neural networks, and even noise injection. The proposed method is built upon recent advances in the physics of interacting particles. We show, both theoretically and experimentally, that these advances can be leveraged to develop novel generative methods.


【5】Admissibility of Stein Shrinkage for Batch Normalization in the Presence of Adversarial Attacks
标题:对抗攻击下Stein收缩对批量归一化的可容许性
链接:https://arxiv.org/abs/2507.08261

作者:lgina, P. Thomas Fletcher, Baba C. Vemuri
摘要:Batch normalization (BN) is a ubiquitous operation in deep neural networks used primarily to achieve stability and regularization during network training. BN involves feature map centering and scaling using sample means and variances, respectively. Since these statistics are being estimated across the feature maps within a batch, this problem is ideally suited for the application of Stein's shrinkage estimation, which leads to a better, in the mean-squared-error sense, estimate of the mean and variance of the batch. In this paper, we prove that the Stein shrinkage estimator for the mean and variance dominates over the sample mean and variance estimators in the presence of adversarial attacks when modeling these attacks using sub-Gaussian distributions. This facilitates and justifies the application of Stein shrinkage to estimate the mean and variance parameters in BN and use it in image classification (segmentation) tasks with and without adversarial attacks. We present SOTA performance results using this Stein corrected batch norm in a standard ResNet architecture applied to the task of image classification using CIFAR-10 data, 3D CNN on PPMI (neuroimaging) data and image segmentation using HRNet on Cityscape data with and without adversarial attacks.
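
For intuition, the classical positive-part James-Stein estimator underlying the paper shrinks a vector of estimated means toward zero whenever the dimension is at least three. A minimal sketch applied to per-channel batch means; how the paper wires this into BN's statistics is more involved:

```python
import numpy as np

def james_stein_shrink(means, sigma2):
    """Positive-part James-Stein estimator: shrink the vector of means toward
    zero by a data-dependent factor (valid for dimension >= 3). `sigma2` is
    the (assumed known) variance of each mean estimate."""
    means = np.asarray(means, dtype=float)
    d = means.size
    if d < 3:
        return means
    shrink = max(0.0, 1.0 - (d - 2) * sigma2 / np.sum(means ** 2))
    return shrink * means

batch_means = np.array([0.8, -0.5, 1.2, 0.1])   # e.g., one mean per feature map
print(james_stein_shrink(batch_means, sigma2=0.05))
```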


【6】Unraveling the Potential of Diffusion Models in Small Molecule Generation
标题:揭开扩散模型在小分子生成中的潜力
链接:https://arxiv.org/abs/2507.08005

作者:hang, Daniel Baker, Minghu Song, Jinbo Bi
摘要:Generative AI presents chemists with novel ideas for drug design and facilitates the exploration of vast chemical spaces. Diffusion models (DMs), an emerging tool, have recently attracted great attention in drug R\&D. This paper comprehensively reviews the latest advancements and applications of DMs in molecular generation. It begins by introducing the theoretical principles of DMs. Subsequently, it categorizes various DM-based molecular generation methods according to their mathematical and chemical applications. The review further examines the performance of these models on benchmark datasets, with a particular focus on comparing the generation performance of existing 3D methods. Finally, it concludes by emphasizing current challenges and suggesting future research directions to fully exploit the potential of DMs in drug discovery.


半/弱/无/有监督|不确定性|主动学习(3篇)

【1】Towards Efficient Quantity Retrieval from Text:an Approach via Description Parsing and Weak Supervision
标题:从文本中实现高效的数量检索:一种通过描述解析和弱监督的方法
链接:https://arxiv.org/abs/2507.08322

作者:o, Zhengrong Chen, Chengxuan Xia, Kun Wu, Ping Luo
备注:Extended version of the paper accepted in DEXA 2025
摘要:Quantitative facts are continually generated by companies and governments, supporting data-driven decision-making. While common facts are structured, many long-tail quantitative facts remain buried in unstructured documents, making them difficult to access. We propose the task of Quantity Retrieval: given a description of a quantitative fact, the system returns the relevant value and supporting evidence. Understanding quantity semantics in context is essential for this task. We introduce a framework based on description parsing that converts text into structured (description, quantity) pairs for effective retrieval. To improve learning, we construct a large paraphrase dataset using weak supervision based on quantity co-occurrence. We evaluate our approach on a large corpus of financial annual reports and a newly annotated quantity description dataset. Our method significantly improves top-1 retrieval accuracy from 30.98 percent to 64.66 percent.
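
The target structure — (description, quantity) pairs — can be illustrated with a toy rule-based parser; the real system learns this parsing, so the regex below is only a sketch of the output format:

```python
import re

# Toy (description, quantity) extractor for report-style sentences.
PATTERN = re.compile(
    r"(?P<desc>[A-Za-z ]+?)\s+(?:was|is|of|reached)\s+"
    r"(?P<value>-?\d+(?:\.\d+)?)\s*(?P<unit>million|billion|percent|%)?",
    re.IGNORECASE,
)

def extract_quantities(text):
    return [(m["desc"].strip().lower(), float(m["value"]), m["unit"] or "")
            for m in PATTERN.finditer(text)]

text = "Net revenue was 12.4 billion. Operating margin reached 18 percent."
print(extract_quantities(text))
# [('net revenue', 12.4, 'billion'), ('operating margin', 18.0, 'percent')]
```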


【2】Self-Supervised Learning-Based Multimodal Prediction on Prosocial Behavior Intentions
标题:基于自我监督学习的亲社会行为意图多模式预测
链接:https://arxiv.org/abs/2507.08238

作者:ddy Naini, Zhaobo K. Zheng, Teruhisa Misu, Kumar Akash
备注:5 pages, 4 figures, published at ICASSP 2025
摘要:Human state detection and behavior prediction have seen significant advancements with the rise of machine learning and multimodal sensing technologies. However, predicting prosocial behavior intentions in mobility scenarios, such as helping others on the road, is an underexplored area. Current research faces a major limitation: there are no large, labeled datasets available for prosocial behavior, and small-scale datasets make it difficult to train deep-learning models effectively. To overcome this, we propose a self-supervised learning approach that harnesses multi-modal data from existing physiological and behavioral datasets. By pre-training our model on diverse tasks and fine-tuning it with a smaller, manually labeled prosocial behavior dataset, we significantly enhance its performance. This method addresses the data scarcity issue, providing a more effective benchmark for prosocial behavior prediction, and offering valuable insights for improving intelligent vehicle systems and human-machine interaction.


【3】Robust Semi-Supervised CT Radiomics for Lung Cancer Prognosis: Cost-Effective Learning with Limited Labels and SHAP Interpretation
标题:用于肺癌预后的稳健半监督CT放射组学:有限标签下的高性价比学习与SHAP解释
链接:https://arxiv.org/abs/2507.08189

作者:R. Salmanpour, Amir Hossein Pouria, Sonia Falahati, Shahram Taeb, Somayeh Sadat Mehrnia, Ali Fathi Jouzdani, Mehrdad Oveisi, Ilker Hacihaliloglu, Arman Rahmim
备注:12 pages, 4 figures
摘要:Background: CT imaging is vital for lung cancer management, offering detailed visualization for AI-based prognosis. However, supervised learning (SL) models require large labeled datasets, limiting their real-world application in settings with scarce annotations.   Methods: We analyzed CT scans from 977 patients across 12 datasets, extracting 1218 radiomics features using Laplacian of Gaussian and wavelet filters via PyRadiomics. Dimensionality reduction was applied with 56 feature selection and extraction algorithms, and 27 classifiers were benchmarked. A semi-supervised learning (SSL) framework with pseudo-labeling utilized 478 unlabeled and 499 labeled cases. Model sensitivity was tested in three scenarios: varying labeled data in SL, increasing unlabeled data in SSL, and scaling both from 10 percent to 100 percent. SHAP analysis was used to interpret predictions. Cross-validation and external testing in two cohorts were performed.   Results: SSL outperformed SL, improving overall survival prediction by up to 17 percent. The top SSL model, Random Forest plus XGBoost classifier, achieved 0.90 accuracy in cross-validation and 0.88 externally. SHAP analysis revealed enhanced feature discriminability in both SSL and SL, especially for Class 1 (survival greater than 4 years). SSL showed strong performance with only 10 percent labeled data, with more stable results compared to SL and lower variance across external testing, highlighting SSL's robustness and cost-effectiveness.   Conclusion: We introduced a cost-effective, stable, and interpretable SSL framework for CT-based survival prediction in lung cancer, improving performance, generalizability, and clinical readiness by integrating SHAP explainability and leveraging unlabeled data.
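
The pseudo-labeling loop at the core of the SSL framework is easy to sketch with scikit-learn. The paper's best pipeline pairs Random Forest with an XGBoost classifier; a plain Random Forest is used here to keep the demo dependency-free:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pseudo_label_ssl(X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    """Self-training sketch: fit on labeled data, adopt high-confidence
    predictions on unlabeled cases as pseudo-labels, refit, repeat."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        clf.fit(X, y)
        if len(X_unlab) == 0:
            break
        proba = clf.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= threshold       # confident predictions only
        if not keep.any():
            break
        pseudo = clf.classes_[proba.argmax(axis=1)]
        X = np.vstack([X, X_unlab[keep]])
        y = np.concatenate([y, pseudo[keep]])
        X_unlab = X_unlab[~keep]
    return clf

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 8))
y = (X[:, 0] > 0).astype(int)
model = pseudo_label_ssl(X[:60], y[:60], X[60:])    # 10% labeled, rest unlabeled
```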


迁移|Zero/Few/One-Shot|自适应(7篇)

【1】Adaptive Nonlinear Vector Autoregression: Robust Forecasting for Noisy Chaotic Time Series
标题:自适应非线性向量自回归:含噪混沌时间序列的鲁棒预测
链接:https://arxiv.org/abs/2507.08738

作者:erkhon, Susana Lopez-Moreno, Eric Dolores-Cuenca, Sieun Lee, Sangil Kim
备注:15 pages, 10 figures
摘要:Nonlinear vector autoregression (NVAR) and reservoir computing (RC) have shown promise in forecasting chaotic dynamical systems, such as the Lorenz-63 model and El Nino-Southern Oscillation. However, their reliance on fixed nonlinearities - polynomial expansions in NVAR or random feature maps in RC - limits their adaptability to high noise or real-world data. These methods also scale poorly in high-dimensional settings due to costly matrix inversion during readout computation. We propose an adaptive NVAR model that combines delay-embedded linear inputs with features generated by a shallow, learnable multi-layer perceptron (MLP). The MLP and linear readout are jointly trained using gradient-based optimization, enabling the model to learn data-driven nonlinearities while preserving a simple readout structure. Unlike standard NVAR, our approach avoids the need for an exhaustive and sensitive grid search over ridge and delay parameters. Instead, tuning is restricted to neural network hyperparameters, improving scalability. Initial experiments on chaotic systems tested under noise-free and synthetically noisy conditions showed that the adaptive model outperformed the standard NVAR in predictive accuracy and showed robust forecasting under noisy conditions with a lower observation frequency.
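
The model's core idea — delay-embedded linear inputs concatenated with learnable MLP features feeding a joint linear readout — fits in a few lines of PyTorch. Layer sizes below are illustrative, not the authors' configuration:

```python
import torch
import torch.nn as nn

class AdaptiveNVAR(nn.Module):
    """Sketch of the adaptive NVAR: the delay-embedded window is fed both
    directly (linear part) and through a shallow trainable MLP, and a linear
    readout maps the concatenation to the next state."""
    def __init__(self, dim, delays, hidden=64):
        super().__init__()
        in_dim = dim * delays
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, hidden), nn.Tanh())
        self.readout = nn.Linear(in_dim + hidden, dim)

    def forward(self, window):          # window: (batch, delays, dim)
        lin = window.flatten(1)         # delay-embedded linear features
        return self.readout(torch.cat([lin, self.mlp(lin)], dim=1))

model = AdaptiveNVAR(dim=3, delays=4)   # e.g., Lorenz-63 with 4 delays
x = torch.randn(8, 4, 3)
print(model(x).shape)                   # next-state prediction: (8, 3)
```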


【2】Monitoring Risks in Test-Time Adaptation
标题:测试时间适应中的风险监控
链接:https://arxiv.org/abs/2507.08721

作者:rmer, Metod Jazbec, Christian A. Naesseth, Eric Nalisnick
摘要:Encountering shifted data at test time is a ubiquitous challenge when deploying predictive models. Test-time adaptation (TTA) methods address this issue by continuously adapting a deployed model using only unlabeled test data. While TTA can extend the model's lifespan, it is only a temporary solution. Eventually the model might degrade to the point that it must be taken offline and retrained. To detect such points of ultimate failure, we propose pairing TTA with risk monitoring frameworks that track predictive performance and raise alerts when predefined performance criteria are violated. Specifically, we extend existing monitoring tools based on sequential testing with confidence sequences to accommodate scenarios in which the model is updated at test time and no test labels are available to estimate the performance metrics of interest. Our extensions unlock the application of rigorous statistical risk monitoring to TTA, and we demonstrate the effectiveness of our proposed TTA monitoring framework across a representative set of datasets, distribution shift types, and TTA methods.
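
A simple stand-in for the proposed monitors is an anytime confidence sequence built from a union bound over per-step Hoeffding intervals: raise an alert once the lower confidence bound on the running error exceeds a predefined threshold. The sketch uses that elementary construction rather than the paper's tighter sequential tests:

```python
import math

class RiskMonitor:
    """Anytime error-rate monitor: with delta split as delta/(t(t+1)) across
    steps, Hoeffding gives a radius valid simultaneously for all t."""
    def __init__(self, threshold, delta=0.05):
        self.threshold, self.delta = threshold, delta
        self.n, self.total = 0, 0.0

    def update(self, loss):             # loss in [0, 1], e.g. a 0/1 error proxy
        self.n += 1
        self.total += loss
        radius = math.sqrt(math.log(2 * self.n * (self.n + 1) / self.delta)
                           / (2 * self.n))
        lower = self.total / self.n - radius
        return lower > self.threshold   # True -> alert: criterion violated

monitor = RiskMonitor(threshold=0.3)
for t in range(200):                    # error rate jumps after step 50
    if monitor.update(0.1 if t <= 50 else 1.0):
        print("alert at step", monitor.n)
        break
```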


【3】A Comprehensively Adaptive Architectural Optimization-Ingrained Quantum Neural Network Model for Cloud Workloads Prediction
标题:用于云工作负载预测的全面自适应架构优化嵌入量子神经网络模型
链接:https://arxiv.org/abs/2507.08317

作者:Kumar, Deepika Saxena, Kishu Gupta, Satyam Kumar, Ashutosh Kumar Singh
摘要 :Accurate workload prediction and advanced resource reservation are indispensably crucial for managing dynamic cloud services. Traditional neural networks and deep learning models frequently encounter challenges with diverse, high-dimensional workloads, especially during sudden resource demand changes, leading to inefficiencies. This issue arises from their limited optimization during training, relying only on parametric (inter-connection weights) adjustments using conventional algorithms. To address this issue, this work proposes a novel Comprehensively Adaptive Architectural Optimization-based Variable Quantum Neural Network (CA-QNN), which combines the efficiency of quantum computing with complete structural and qubit vector parametric learning. The model converts workload data into qubits, processed through qubit neurons with Controlled NOT-gated activation functions for intuitive pattern recognition. In addition, a comprehensive architecture optimization algorithm for networks is introduced to facilitate the learning and propagation of the structure and parametric values in variable-sized QNNs. This algorithm incorporates quantum adaptive modulation and size-adaptive recombination during training process. The performance of CA-QNN model is thoroughly investigated against seven state-of-the-art methods across four benchmark datasets of heterogeneous cloud workloads. The proposed model demonstrates superior prediction accuracy, reducing prediction errors by up to 93.40% and 91.27% compared to existing deep learning and QNN-based approaches.


【4】Transfer Learning and Mixup for Fine-Grained Few-Shot Fungi Classification
标题:细粒度少样本真菌分类的迁移学习与Mixup
链接:https://arxiv.org/abs/2507.08248

作者:ei Tam, Murilo Gustineli, Anthony Miyaguchi
摘要:Accurate identification of fungi species presents a unique challenge in computer vision due to fine-grained inter-species variation and high intra-species variation. This paper presents our approach for the FungiCLEF 2025 competition, which focuses on few-shot fine-grained visual categorization (FGVC) using the FungiTastic Few-Shot dataset. Our team (DS@GT) experimented with multiple vision transformer models, data augmentation, weighted sampling, and incorporating textual information. We also explored generative AI models for zero-shot classification using structured prompting but found them to significantly underperform relative to vision-based models. Our final model outperformed both competition baselines and highlighted the effectiveness of domain specific pretraining and balanced sampling strategies. Our approach ranked 35/74 on the private test set in post-completion evaluation, this suggests additional work can be done on metadata selection and domain-adapted multi-modal learning. Our code is available at https://github.com/dsgt-arc/fungiclef-2025.
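
The mixup component is standard (Zhang et al., 2018) and worth showing: convex combinations of input pairs with correspondingly mixed losses. The alpha value is a common default, not necessarily the competition entry's setting:

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.4):
    """Mix each sample with a randomly permuted partner; the same coefficient
    mixes the two labels in the loss."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    return lam * F.cross_entropy(logits, y_a) + (1 - lam) * F.cross_entropy(logits, y_b)

# Usage inside a training step:
#   mx, ya, yb, lam = mixup_batch(images, labels)
#   loss = mixup_loss(model(mx), ya, yb, lam)
```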


【5】Adaptive Diffusion Denoised Smoothing : Certified Robustness via Randomized Smoothing with Differentially Private Guided Denoising Diffusion
标题:自适应扩散去噪平滑:通过差分隐私引导去噪扩散的随机平滑实现认证鲁棒性
链接:https://arxiv.org/abs/2507.08163

作者: Shpilevskiy, Saiyue Lyu, Krishnamurthy Dj Dvijotham, Mathias Lécuyer, Pierre-André Noël
摘要:We propose Adaptive Diffusion Denoised Smoothing, a method for certifying the predictions of a vision model against adversarial examples, while adapting to the input. Our key insight is to reinterpret a guided denoising diffusion model as a long sequence of adaptive Gaussian Differentially Private (GDP) mechanisms refining a pure noise sample into an image. We show that these adaptive mechanisms can be composed through a GDP privacy filter to analyze the end-to-end robustness of the guided denoising process, yielding a provable certification that extends the adaptive randomized smoothing analysis. We demonstrate that our design, under a specific guiding strategy, can improve both certified accuracy and standard accuracy on ImageNet for an $\ell_2$ threat model.


【6】ALCo-FM: Adaptive Long-Context Foundation Model for Accident Prediction
标题:ALCo-FM:事故预测的自适应长上下文基础模型
链接:https://arxiv.org/abs/2507.08153

作者:asad Guha Neogi, Ahmad Mohammadshirazi, Rajiv Ramnath
摘要:Traffic accidents are rare, yet high-impact events that require long-context multimodal reasoning for accurate risk forecasting. In this paper, we introduce ALCo-FM, a unified adaptive long-context foundation model that computes a volatility pre-score to dynamically select context windows for input data and encodes and fuses these multimodal data via shallow cross attention. Following a local GAT layer and a BigBird-style sparse global transformer over H3 hexagonal grids, coupled with Monte Carlo dropout for confidence, the model yields superior, well-calibrated predictions. Trained on data from 15 US cities with a class-weighted loss to counter label imbalance, and fine-tuned with minimal data on held-out cities, ALCo-FM achieves 0.94 accuracy, 0.92 F1, and an ECE of 0.04, outperforming more than 20 state-of-the-art baselines in large-scale urban risk prediction. Code and dataset are available at: https://github.com/PinakiPrasad12/ALCo-FM


【7】An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis
标题:用于呼吸道疾病诊断的增强隐私保护联邦Few-Shot学习框架
链接:https://arxiv.org/abs/2507.08050

作者:, Zhaoyang Duan, Dong Xue, Fangzhou Liu, Zhongheng Zhang
摘要:The labor-intensive nature of medical data annotation presents a significant challenge for respiratory disease diagnosis, resulting in a scarcity of high-quality labeled datasets in resource-constrained settings. Moreover, patient privacy concerns complicate the direct sharing of local medical data across institutions, and existing centralized data-driven approaches, which rely on amounts of available data, often compromise data privacy. This study proposes a federated few-shot learning framework with privacy-preserving mechanisms to address the issues of limited labeled data and privacy protection in diagnosing respiratory diseases. In particular, a meta-stochastic gradient descent algorithm is proposed to mitigate the overfitting problem that arises from insufficient data when employing traditional gradient descent methods for neural network training. Furthermore, to ensure data privacy against gradient leakage, differential privacy noise from a standard Gaussian distribution is integrated into the gradients during the training of private models with local data, thereby preventing the reconstruction of medical images. Given the impracticality of centralizing respiratory disease data dispersed across various medical institutions, a weighted average algorithm is employed to aggregate local diagnostic models from different clients, enhancing the adaptability of a model across diverse scenarios. Experimental results show that the proposed method yields compelling results with the implementation of differential privacy, while effectively diagnosing respiratory diseases using data from different structures, categories, and distributions.
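
Two mechanisms from the abstract — gradient clipping with Gaussian noise on each client, and size-weighted server aggregation — can be sketched directly; the clipping and noise calibration below are illustrative, not the paper's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_gradient(grad, clip_norm=1.0, noise_scale=1.0):
    """Clip a client gradient and add Gaussian noise before it leaves the
    device, preventing gradient-leakage reconstruction of medical images."""
    g = np.asarray(grad, dtype=float)
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
    return g + rng.normal(scale=noise_scale * clip_norm, size=g.shape)

def weighted_average(client_params, client_sizes):
    """Server-side aggregation: weight each client's model by its data size."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * np.asarray(p, dtype=float)
               for wi, p in zip(w, client_params))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
print(weighted_average(clients, client_sizes=[100, 300]))   # -> [2.5 3.5]
```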


强化学习(5篇)

【1】Optimistic Exploration for Risk-Averse Constrained Reinforcement Learning
标题:风险厌恶约束强化学习的乐观探索
链接:https://arxiv.org/abs/2507.08793

作者:arthy, Radu Marinescu, Elizabeth Daly, Ivana Dusparic
摘要:Risk-averse Constrained Reinforcement Learning (RaCRL) aims to learn policies that minimise the likelihood of rare and catastrophic constraint violations caused by an environment's inherent randomness. In general, risk-aversion leads to conservative exploration of the environment which typically results in converging to sub-optimal policies that fail to adequately maximise reward or, in some cases, fail to achieve the goal. In this paper, we propose an exploration-based approach for RaCRL called Optimistic Risk-averse Actor Critic (ORAC), which constructs an exploratory policy by maximising a local upper confidence bound of the state-action reward value function whilst minimising a local lower confidence bound of the risk-averse state-action cost value function. Specifically, at each step, the weighting assigned to the cost value is increased or decreased if it exceeds or falls below the safety constraint value. This way the policy is encouraged to explore uncertain regions of the environment to discover high reward states whilst still satisfying the safety constraints. Our experimental results demonstrate that the ORAC approach prevents convergence to sub-optimal policies and improves significantly the reward-cost trade-off in various continuous control tasks such as Safety-Gymnasium and a complex building energy management environment CityLearn.


【2】Penalizing Infeasible Actions and Reward Scaling in Reinforcement Learning with Offline Data
标题:离线数据强化学习中惩罚不可行的动作和奖励缩放
链接:https://arxiv.org/abs/2507.08761

作者:Kim, Yongjae Shin, Whiyoung Jung, Sunghoon Hong, Deunsol Yoon, Youngchul Sung, Kanghoon Lee, Woohyung Lim
备注:Accepted to ICML2025
摘要:Reinforcement learning with offline data suffers from Q-value extrapolation errors. To address this issue, we first demonstrate that linear extrapolation of the Q-function beyond the data range is particularly problematic. To mitigate this, we propose guiding the gradual decrease of Q-values outside the data range, which is achieved through reward scaling with layer normalization (RS-LN) and a penalization mechanism for infeasible actions (PA). By combining RS-LN and PA, we develop a new algorithm called PARS. We evaluate PARS across a range of tasks, demonstrating superior performance compared to state-of-the-art algorithms in both offline training and online fine-tuning on the D4RL benchmark, with notable success in the challenging AntMaze Ultra task.


【3】SAM2RL: Towards Reinforcement Learning Memory Control in Segment Anything Model 2
标题:SAM2RL:Segment Anything模型2中的强化学习记忆控制
链接:https://arxiv.org/abs/2507.08548

作者:yan, Tomáš Čížek, Matej Straka, Klara Janouskova, Martin Schmid
摘要:Segment Anything Model 2 (SAM 2) has demonstrated strong performance in object segmentation tasks and has become the state-of-the-art for visual object tracking. The model stores information from previous frames in a memory bank, enabling temporal consistency across video sequences. Recent methods augment SAM 2 with hand-crafted update rules to better handle distractors, occlusions, and object motion. We propose a fundamentally different approach using reinforcement learning for optimizing memory updates in SAM 2 by framing memory control as a sequential decision-making problem. In an overfitting setup with a separate agent per video, our method achieves a relative improvement over SAM 2 that exceeds by more than three times the gains of existing heuristics. These results reveal the untapped potential of the memory bank and highlight reinforcement learning as a powerful alternative to hand-crafted update rules for memory control in visual object tracking.


【4】Online Pre-Training for Offline-to-Online Reinforcement Learning
标题:离线到在线强化学习的在线预训练
链接:https://arxiv.org/abs/2507.08387

作者:hin, Jeonghye Kim, Whiyoung Jung, Sunghoon Hong, Deunsol Yoon, Youngsoo Jang, Geonhyeong Kim, Jongseong Chae, Youngchul Sung, Kanghoon Lee, Woohyung Lim
备注:ICML 2025 camera-ready
摘要:Offline-to-online reinforcement learning (RL) aims to integrate the complementary strengths of offline and online RL by pre-training an agent offline and subsequently fine-tuning it through online interactions. However, recent studies reveal that offline pre-trained agents often underperform during online fine-tuning due to inaccurate value estimation caused by distribution shift, with random initialization proving more effective in certain cases. In this work, we propose a novel method, Online Pre-Training for Offline-to-Online RL (OPT), explicitly designed to address the issue of inaccurate value estimation in offline pre-trained agents. OPT introduces a new learning phase, Online Pre-Training, which allows the training of a new value function tailored specifically for effective online fine-tuning. Implementation of OPT on TD3 and SPOT demonstrates an average 30% improvement in performance across a wide range of D4RL environments, including MuJoCo, Antmaze, and Adroit.


【5】Safe Deep Reinforcement Learning for Resource Allocation with Peak Age of Information Violation Guarantees
标题:具有峰值信息年龄(PAoI)违规保证的资源分配安全深度强化学习
链接:https://arxiv.org/abs/2507.08653

作者:nes Reyhan, Sinem Coleri
备注:15 Pages, to be published in IEEE Transactions on Communications
摘要:In Wireless Networked Control Systems (WNCSs), control and communication systems must be co-designed due to their strong interdependence. This paper presents a novel optimization theory-based safe deep reinforcement learning (DRL) framework for ultra-reliable WNCSs, ensuring constraint satisfaction while optimizing performance, for the first time in the literature. The approach minimizes power consumption under key constraints, including Peak Age of Information (PAoI) violation probability, transmit power, and schedulability in the finite blocklength regime. PAoI violation probability is uniquely derived by combining stochastic maximum allowable transfer interval (MATI) and maximum allowable packet delay (MAD) constraints in a multi-sensor network. The framework consists of two stages: optimization theory and safe DRL. The first stage derives optimality conditions to establish mathematical relationships among variables, simplifying and decomposing the problem. The second stage employs a safe DRL model where a teacher-student framework guides the DRL agent (student). The control mechanism (teacher) evaluates compliance with system constraints and suggests the nearest feasible action when needed. Extensive simulations show that the proposed framework outperforms rule-based and other optimization theory based DRL benchmarks, achieving faster convergence, higher rewards, and greater stability.


符号|符号学习(1篇)

【1】ML-Based Automata Simplification for Symbolic Accelerators
标题:符号加速器的基于ML的自动机简化
链接:https://arxiv.org/abs/2507.08751

作者:u, Rye Stahle-Smith, Darssan Eswaramoorthi, Rasha Karakchi
摘要:Symbolic accelerators are increasingly used for symbolic data processing in domains such as genomics, NLP, and cybersecurity. However, these accelerators face scalability issues due to excessive memory use and routing complexity, especially when targeting a large pattern set. We present AutoSlim, a machine learning-based graph simplification framework designed to reduce the complexity of symbolic accelerators built on Non-deterministic Finite Automata (NFA) deployed on FPGA-based overlays such as NAPOLY+. AutoSlim uses Random Forest classification to prune low-impact transitions based on edge scores and structural features, significantly reducing automata graph density while preserving semantic correctness. Unlike prior tools, AutoSlim targets automated score-aware simplification with weighted transitions, enabling efficient ranking-based sequence analysis. We evaluated data sets (1K to 64K nodes) in NAPOLY+ and conducted performance measurements including latency, throughput, and resource usage. AutoSlim achieves up to 40 percent reduction in FPGA LUTs and over 30 percent pruning in transitions, while scaling to graphs an order of magnitude larger than existing benchmarks. Our results also demonstrate how hardware interconnection (fanout) heavily influences hardware cost and that AutoSlim's pruning mitigates resource blowup.
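The scoring-and-pruning step lends itself to a short illustration. Below is a minimal sketch, assuming each NFA transition has already been featurized into a numeric vector; the feature semantics, the synthetic labels, and the 0.3 keep-threshold are all invented for illustration and are not AutoSlim's actual pipeline:

```python
# Hypothetical Random Forest scoring of automaton transitions: featurize each
# edge, predict its impact, and prune edges scored below a threshold.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
edge_features = rng.normal(size=(1000, 6))        # e.g., fan-in/out, usage counts
edge_impact = (edge_features[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=50).fit(edge_features, edge_impact)
keep = clf.predict_proba(edge_features)[:, 1] > 0.3   # prune low-scoring transitions
print(f"kept {keep.mean():.0%} of transitions")
```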


医学相关(2篇)

【1】Interpretability-Aware Pruning for Efficient Medical Image Analysis
标题:可解释性感知修剪以实现高效的医学图像分析
链接:https://arxiv.org/abs/2507.08330

作者:lik, Pratinav Seth, Neeraj Kumar Singh, Chintan Chitroda, Vinay Kumar Sankarapu
备注:Pre-Print
摘要:Deep learning has driven significant advances in medical image analysis, yet its adoption in clinical practice remains constrained by the large size and lack of transparency in modern models. Advances in interpretability techniques such as DL-Backtrace, Layer-wise Relevance Propagation, and Integrated Gradients make it possible to assess the contribution of individual components within neural networks trained on medical imaging tasks. In this work, we introduce an interpretability-guided pruning framework that reduces model complexity while preserving both predictive performance and transparency. By selectively retaining only the most relevant parts of each layer, our method enables targeted compression that maintains clinically meaningful representations. Experiments across multiple medical image classification benchmarks demonstrate that this approach achieves high compression rates with minimal loss in accuracy, paving the way for lightweight, interpretable models suited for real-world deployment in healthcare settings.


【2】Raptor: Scalable Train-Free Embeddings for 3D Medical Volumes Leveraging Pretrained 2D Foundation Models
标题:Raptor:利用预训练2D基础模型的可扩展免训练3D医学体积嵌入
链接:https://arxiv.org/abs/2507.08254

作者: Moonseong Jeong, Simon A. Lee, Aditya Gorla, Yuzhe Yang, Sriram Sankararaman
备注:21 pages, 10 figures, accepted to ICML 2025. The first two authors contributed equally
摘要:Current challenges in developing foundational models for volumetric imaging data, such as magnetic resonance imaging (MRI), stem from the computational complexity of training state-of-the-art architectures in high dimensions and curating sufficiently large datasets of volumes. To address these challenges, we introduce Raptor (Random Planar Tensor Reduction), a train-free method for generating semantically rich embeddings for volumetric data. Raptor leverages a frozen 2D foundation model, pretrained on natural images, to extract visual tokens from individual cross-sections of medical volumes. These tokens are then spatially compressed using random projections, significantly reducing computational complexity while retaining semantic information. Extensive experiments on ten diverse medical volume tasks verify the superior performance of Raptor over state-of-the-art methods, including those pretrained exclusively on medical volumes (+3% SuPreM, +6% MISFM, +10% Merlin, +13% VoCo, and +14% SLIViT), while entirely bypassing the need for costly training. Our results highlight the effectiveness and versatility of Raptor as a foundation for advancing deep learning-based methods for medical volumes.


蒸馏|知识提取(3篇)

【1】A Hybrid Multi-Well Hopfield-CNN with Feature Extraction and K-Means for MNIST Classification
标题:基于特征提取和K-means的混合多阱Hopfield-CNN MNIST分类
链接:https://arxiv.org/abs/2507.08766

作者:ooq
摘要:This study presents a hybrid model for classifying handwritten digits in the MNIST dataset, combining convolutional neural networks (CNNs) with a multi-well Hopfield network. The approach employs a CNN to extract high-dimensional features from input images, which are then clustered into class-specific prototypes using k-means clustering. These prototypes serve as attractors in a multi-well energy landscape, where a Hopfield network performs classification by minimizing an energy function that balances feature similarity and class assignment. The model's design enables robust handling of intraclass variability, such as diverse handwriting styles, while providing an interpretable framework through its energy-based decision process. Through systematic optimization of the CNN architecture and the number of wells, the model achieves a high test accuracy of 99.2% on 10,000 MNIST images, demonstrating its effectiveness for image classification tasks. The findings highlight the critical role of deep feature extraction and sufficient prototype coverage in achieving high performance, with potential for broader applications in pattern recognition.
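To make the prototype-as-attractor idea concrete, here is a minimal sketch of the clustering and energy-based classification steps, assuming CNN features have already been extracted; the quadratic single-well energy and the choice of three wells per class are simplifying assumptions, not the paper's exact formulation:

```python
# Cluster each class's features into prototypes (wells), then classify a
# point by the class of its lowest-energy well, E_k(x) = ||x - p_k||^2.
import numpy as np
from sklearn.cluster import KMeans

def fit_prototypes(features, labels, wells_per_class=3):
    """Cluster each class's features into `wells_per_class` prototypes."""
    prototypes, proto_labels = [], []
    for c in np.unique(labels):
        km = KMeans(n_clusters=wells_per_class, n_init=10).fit(features[labels == c])
        prototypes.append(km.cluster_centers_)
        proto_labels.extend([c] * wells_per_class)
    return np.vstack(prototypes), np.array(proto_labels)

def classify(x, prototypes, proto_labels):
    energies = np.sum((prototypes - x) ** 2, axis=1)
    return proto_labels[np.argmin(energies)]

# Toy usage with random stand-in "features"
rng = np.random.default_rng(0)
feats, labs = rng.normal(size=(200, 16)), rng.integers(0, 10, size=200)
P, PL = fit_prototypes(feats, labs)
print(classify(feats[0], P, PL))
```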


【2】Forget Me Not: Fighting Local Overfitting with Knowledge Fusion and Distillation
标题:别忘了我:用知识融合和蒸馏对抗局部过拟合
链接:https://arxiv.org/abs/2507.08686

作者:, Eli Corn, Daphna Weinshall
备注:arXiv admin note: substantial text overlap with arXiv:2412.12968
摘要:Overfitting in deep neural networks occurs less frequently than expected. This is a puzzling observation, as theory predicts that greater model capacity should eventually lead to overfitting -- yet this is rarely seen in practice. But what if overfitting does occur, not globally, but in specific sub-regions of the data space? In this work, we introduce a novel score that measures the forgetting rate of deep models on validation data, capturing what we term local overfitting: a performance degradation confined to certain regions of the input space. We demonstrate that local overfitting can arise even without conventional overfitting, and is closely linked to the double descent phenomenon.   Building on these insights, we introduce a two-stage approach that leverages the training history of a single model to recover and retain forgotten knowledge: first, by aggregating checkpoints into an ensemble, and then by distilling it into a single model of the original size, thus enhancing performance without added inference cost.   Extensive experiments across multiple datasets, modern architectures, and training regimes validate the effectiveness of our approach. Notably, in the presence of label noise, our method -- Knowledge Fusion followed by Knowledge Distillation -- outperforms both the original model and independently trained ensembles, achieving a rare win-win scenario: reduced training and inference complexity.
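A condensed PyTorch sketch of the two-stage recipe follows: checkpoints of one training run act as a logit-averaging ensemble (Knowledge Fusion), which is then distilled into a single model of the original size (Knowledge Distillation). The temperature and the toy linear models are assumptions for illustration:

```python
# Fuse checkpoints into a logit-averaging teacher, then distill the student.
import torch
import torch.nn as nn
import torch.nn.functional as F

def ensemble_logits(models, x):
    """Knowledge Fusion: average the logits of all checkpoints."""
    with torch.no_grad():
        return torch.stack([m(x) for m in models]).mean(dim=0)

def distill_step(student, teachers, x, optimizer, T=2.0):
    """Knowledge Distillation: match the softened ensemble outputs."""
    t_logits = ensemble_logits(teachers, x)
    loss = F.kl_div(F.log_softmax(student(x) / T, dim=-1),
                    F.softmax(t_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: three "checkpoints" act as teachers for a fresh student.
teachers = [nn.Linear(8, 4) for _ in range(3)]
student = nn.Linear(8, 4)
opt = torch.optim.SGD(student.parameters(), lr=0.1)
print(distill_step(student, teachers, torch.randn(16, 8), opt))
```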


【3】SFedKD: Sequential Federated Learning with Discrepancy-Aware Multi-Teacher Knowledge Distillation
标题:SFedKD:具有差异感知的多教师知识提炼的顺序联邦学习
链接:https://arxiv.org/abs/2507.08508

作者:u, Jinrui Zhou, Xichong Zhang, Mingjun Xiao, He Sun, Yin Xu
摘要:Federated Learning (FL) is a distributed machine learning paradigm which coordinates multiple clients to collaboratively train a global model via a central server. Sequential Federated Learning (SFL) is a newly-emerging FL training framework where the global model is trained in a sequential manner across clients. Since SFL can provide strong convergence guarantees under data heterogeneity, it has attracted significant research attention in recent years. However, experiments show that SFL suffers from severe catastrophic forgetting in heterogeneous environments, meaning that the model tends to forget knowledge learned from previous clients. To address this issue, we propose an SFL framework with discrepancy-aware multi-teacher knowledge distillation, called SFedKD, which selects multiple models from the previous round to guide the current round of training. In SFedKD, we extend the single-teacher Decoupled Knowledge Distillation approach to our multi-teacher setting and assign distinct weights to teachers' target-class and non-target-class knowledge based on the class distributional discrepancy between teacher and student data. Through this fine-grained weighting strategy, SFedKD can enhance model training efficacy while mitigating catastrophic forgetting. Additionally, to prevent knowledge dilution, we eliminate redundant teachers for the knowledge distillation and formalize it as a variant of the maximum coverage problem. Based on the greedy strategy, we design a complementary-based teacher selection mechanism to ensure that the selected teachers achieve comprehensive knowledge space coverage while reducing communication and computational costs. Extensive experiments show that SFedKD effectively overcomes catastrophic forgetting in SFL and outperforms state-of-the-art FL methods.


聚类(3篇)

【1】Two-cluster test
标题:两簇检验
链接:https://arxiv.org/abs/2507.08382

作者:iu, Lianyu Hu, Mudi Jiang, Simen Zhang, Jun Lou, Zengyou He
摘要:Cluster analysis is a fundamental research issue in statistics and machine learning. In many modern clustering methods, we need to determine whether two subsets of samples come from the same cluster. Since these subsets are usually generated by certain clustering procedures, the deployment of classic two-sample tests in this context would yield extremely small p-values, leading to an inflated Type-I error rate. To overcome this bias, we formally introduce the two-cluster test issue and argue that it is a totally different significance testing issue from the conventional two-sample test. Meanwhile, we present a new method based on the boundary points between two subsets to derive an analytical p-value for the purpose of significance quantification. Experiments on both synthetic and real data sets show that the proposed test is able to significantly reduce the Type-I error rate, in comparison with several classic two-sample testing methods. More importantly, the practical usage of such a two-cluster test is further verified through its applications in tree-based interpretable clustering and significance-based hierarchical clustering.


【2】CAS Condensed and Accelerated Silhouette: An Efficient Method for Determining the Optimal K in K-Means Clustering
标题:CAS压缩和加速轮廓:一种有效的确定K均值聚类中最优K值的方法
链接:https://arxiv.org/abs/2507.08311

作者:u Das, Sumit Gupta, Awadhesh Kumar
摘要:Clustering is a critical component of decision-making in today's data-driven environments. It has been widely used in a variety of fields such as bioinformatics, social network analysis, and image processing. However, clustering accuracy remains a major challenge in large datasets. This paper presents a comprehensive overview of strategies for selecting the optimal value of k in clustering, with a focus on achieving a balance between clustering precision and computational efficiency in complex data environments. In addition, this paper introduces improvements to clustering techniques for text and image data to provide insights into better computational performance and cluster validity. The proposed approach is based on the Condensed Silhouette method, along with statistical methods such as Local Structures, Gap Statistics, Class Consistency Ratio (CCR), and a Cluster Overlap Index (COI)-based algorithm to calculate the best value of k for K-Means clustering. The results of comparative experiments show that the proposed approach achieves up to 99 percent faster execution times on high-dimensional datasets while retaining both precision and scalability, making it highly suitable for real-time clustering needs or scenarios demanding efficient clustering with minimal resource utilization.
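As a simple stand-in for the proposed pipeline, the sketch below selects k by maximizing a silhouette-style criterion; it uses scikit-learn's standard silhouette score rather than the paper's Condensed Silhouette or the CCR/COI statistics:

```python
# Pick k by maximizing the silhouette score over a candidate range.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k(X, k_range=range(2, 11)):
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X)
        scores[k] = silhouette_score(X, labels)
    return max(scores, key=scores.get), scores

X = np.random.default_rng(1).normal(size=(300, 5))
k, scores = best_k(X)
print(k, round(scores[k], 3))
```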


【3】CoreSPECT: Enhancing Clustering Algorithms via an Interplay of Density and Geometry
标题:CoreSPECT:通过密度与几何的相互作用增强聚类算法
链接:https://arxiv.org/abs/2507.08243

作者:ekhar Mukherjee, Joonyoung Bae, Jiapeng Zhang
摘要:Density and geometry have long served as two of the fundamental guiding principles in clustering algorithm design, with algorithms usually focusing either on the density structure of the data (e.g., HDBSCAN and Density Peak Clustering) or the complexity of the underlying geometry (e.g., manifold clustering algorithms).   In this paper, we identify and formalize a recurring but often overlooked interaction between distribution and geometry and leverage this insight to design our clustering enhancement framework CoreSPECT (Core Space Projection-based Enhancement of Clustering Techniques). Our framework boosts the performance of simple algorithms like K-Means and GMM by applying them to strategically selected regions, then extending the partial partition to a complete partition for the dataset using a novel neighborhood graph based multi-layer propagation procedure.   We apply our framework on 15 datasets from three different domains and obtain consistent and substantial gains in clustering accuracy for both K-Means and GMM. On average, our framework improves the ARI of K-Means by 40% and of GMM by 14%, often surpassing the performance of both manifold-based and recent density-based clustering algorithms. We further support our framework with initial theoretical guarantees, with ablations demonstrating the usefulness of the individual steps, and with evidence of robustness to noise.


自动驾驶|车辆|车道检测等(1篇)

【1】STRAP: Spatial-Temporal Risk-Attentive Vehicle Trajectory Prediction for Autonomous Driving
标题:STRAP:用于自动驾驶的时空风险注意力车辆轨迹预测
链接:https://arxiv.org/abs/2507.08563

作者:g, Zilin Bian, Kaan Ozbay, Semiha Ergan
备注:6 pages, 3 figures, accepted at ITSC 2025
摘要:Accurate vehicle trajectory prediction is essential for ensuring safety and efficiency in fully autonomous driving systems. While existing methods primarily focus on modeling observed motion patterns and interactions with other vehicles, they often neglect the potential risks posed by the uncertain or aggressive behaviors of surrounding vehicles. In this paper, we propose a novel spatial-temporal risk-attentive trajectory prediction framework that incorporates a risk potential field to assess perceived risks arising from behaviors of nearby vehicles. The framework leverages a spatial-temporal encoder and a risk-attentive feature fusion decoder to embed the risk potential field into the extracted spatial-temporal feature representations for trajectory prediction. A risk-scaled loss function is further designed to improve the prediction accuracy of high-risk scenarios, such as short relative spacing. Experiments on the widely used NGSIM and HighD datasets demonstrate that our method reduces average prediction errors by 4.8% and 31.2% respectively compared to state-of-the-art approaches, especially in high-risk scenarios. The proposed framework provides interpretable, risk-aware predictions, contributing to more robust decision-making for autonomous driving systems.
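The notion of a risk potential field can be illustrated with a toy scalar field in which perceived risk decays with distance to each surrounding vehicle and grows with its speed; the Gaussian form and the speed weighting here are assumptions, not the paper's definition:

```python
# Toy risk potential field: sum of speed-weighted Gaussian bumps centred on
# each surrounding vehicle, evaluated at a query location.
import numpy as np

def risk_field(xy, vehicles, speeds, sigma=5.0):
    d2 = ((vehicles - xy) ** 2).sum(axis=1)       # squared distances
    return np.sum((1.0 + speeds) * np.exp(-d2 / (2 * sigma**2)))

vehicles = np.array([[10.0, 0.0], [-3.0, 4.0]])   # nearby vehicle positions
print(risk_field(np.zeros(2), vehicles, speeds=np.array([8.0, 2.0])))
```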


点云|SLAM|雷达|激光|深度RGBD相关(1篇)

【1】Data Depth as a Risk
标题:将数据深度视为风险
链接:https://arxiv.org/abs/2507.08518

作者:stellanos, Pavlo Mozharovskyi
摘要:Data depths are score functions that quantify, in an unsupervised fashion, how central a point is inside a distribution, with numerous applications such as anomaly detection and multivariate or functional data analysis arising across various fields. The halfspace depth was the first depth to aim at generalising the notion of quantile beyond the univariate case. Among the existing variety of depth definitions, it remains one of the most used notions of data depth. Taking a different angle from the quantile point of view, we show that the halfspace depth can also be regarded as the minimum loss of a set of classifiers for a specific labelling of the points. By changing the loss or the set of classifiers considered, this new angle naturally leads to a family of "loss depths", extending to well-studied classifiers such as, e.g., SVM or logistic regression, among others. This framework directly inherits the computational efficiency of existing machine learning algorithms as well as their fast statistical convergence rates, and opens the data depth realm to the high-dimensional setting. Furthermore, the new loss depths highlight a connection between the dataset and the right amount of complexity or simplicity of the classifiers. The simplicity of classifiers as well as the interpretation as a risk makes our new kind of data depth easy to explain, yet efficient for anomaly detection, as is shown by experiments.
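The classifier-loss reading of depth is easy to approximate for the halfspace case: over random directions (i.e., linear classifiers through the query point), take the minimum fraction of points left on one side. A minimal Monte Carlo sketch, with the number of directions chosen arbitrarily:

```python
# Monte Carlo approximation of the halfspace (Tukey) depth of a point z:
# the minimum, over random directions, of the mass of the halfspace
# through z containing the data.
import numpy as np

def approx_halfspace_depth(z, X, n_dirs=1000, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.normal(size=(n_dirs, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    proj = (X - z) @ U.T               # projections of centred data
    frac = (proj >= 0).mean(axis=0)    # halfspace mass per direction
    return frac.min()

X = np.random.default_rng(2).normal(size=(500, 3))
print(approx_halfspace_depth(np.zeros(3), X))       # deep point: near 0.5
print(approx_halfspace_depth(np.full(3, 5.0), X))   # outlying point: near 0
```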


联邦学习|隐私保护|加密(1篇)

【1】Towards Collaborative Fairness in Federated Learning Under Imbalanced Covariate Shift
标题:非平衡协变量漂移下联邦学习中的协作公平性
链接:https://arxiv.org/abs/2507.08617

作者:u, Jiaqi Wang, Haoyu Wang, Mingquan Lin, Han Liu, Nelson S. Yee, Fenglong Ma
备注:18 pages, accepted to the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD' 25), Toronto, Canada, August 3-7 2025
摘要:Collaborative fairness is a crucial challenge in federated learning. However, existing approaches often overlook a practical yet complex form of heterogeneity: imbalanced covariate shift. We provide a theoretical analysis of this setting, which motivates the design of FedAKD (Federated Asynchronous Knowledge Distillation) - a simple yet effective approach that balances accurate prediction with collaborative fairness. FedAKD consists of client and server updates. In the client update, we introduce a novel asynchronous knowledge distillation strategy based on our preliminary analysis, which reveals that while correctly predicted samples exhibit similar feature distributions across clients, incorrectly predicted samples show significant variability. This suggests that imbalanced covariate shift primarily arises from misclassified samples. Leveraging this insight, our approach first applies traditional knowledge distillation to update client models while keeping the global model fixed. Next, we select correctly predicted high-confidence samples and update the global model using these samples while keeping client models fixed. The server update simply aggregates all client models. We further provide a theoretical proof of FedAKD's convergence. Experimental results on public datasets (FashionMNIST and CIFAR10) and a real-world Electronic Health Records (EHR) dataset demonstrate that FedAKD significantly improves collaborative fairness, enhances predictive accuracy, and fosters client participation even under highly heterogeneous data distributions.


推理|分析|理解|解释(6篇)

【1】Evaluating SAE interpretability without explanations
标题:在没有解释的情况下评估稀疏自编码器(SAE)的可解释性
链接:https://arxiv.org/abs/2507.08473

作者:aulo, Nora Belrose
摘要:Sparse autoencoders (SAEs) and transcoders have become important tools for machine learning interpretability. However, measuring how interpretable they are remains challenging, with weak consensus about which benchmarks to use. Most evaluation procedures start by producing a single-sentence explanation for each latent. These explanations are then evaluated based on how well they enable an LLM to predict the activation of a latent in new contexts. This method makes it difficult to disentangle the explanation generation and evaluation process from the actual interpretability of the latents discovered. In this work, we adapt existing methods to assess the interpretability of sparse coders, with the advantage that they do not require generating natural language explanations as an intermediate step. This enables a more direct and potentially standardized assessment of interpretability. Furthermore, we compare the scores produced by our interpretability metrics with human evaluations across similar tasks and varying setups, offering suggestions for the community on improving the evaluation of these techniques.


【2】Why this and not that? A Logic-based Framework for Contrastive Explanations
标题:为什么是这个而不是那个?基于逻辑的对比解释框架
链接:https://arxiv.org/abs/2507.08454

作者:ibinger, Reijo Jaakkola, Antti Kuusisto, Xinghan Liu, Miikka Vilander
备注:20 pages, accepted to JELIA 2025
摘要:We define several canonical problems related to contrastive explanations, each answering a question of the form ''Why P but not Q?''. The problems compute causes for both P and Q, explicitly comparing their differences. We investigate the basic properties of our definitions in the setting of propositional logic. We show, inter alia, that our framework captures a cardinality-minimal version of existing contrastive explanations in the literature. Furthermore, we provide an extensive analysis of the computational complexities of the problems. We also implement the problems for CNF-formulas using answer set programming and present several examples demonstrating how they work in practice.


【3】M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning
标题:M2-Reasoning:通过统一的一般和空间推理增强MLLM
链接:https://arxiv.org/abs/2507.08306

作者: AI: Fudong Wang, Jiajia Liu, Jingdong Chen, Jun Zhou, Kaixiang Ji, Lixiang Ru, Qingpei Guo, Ruobing Zheng, Tianqi Li, Yi Yuan, Yifan Mao, Yuting Xiao, Ziping Ma
备注:31pages, 14 figures
摘要:Recent advancements in Multimodal Large Language Models (MLLMs), particularly through Reinforcement Learning with Verifiable Rewards (RLVR), have significantly enhanced their reasoning abilities. However, a critical gap persists: these models struggle with dynamic spatial interactions, a capability essential for real-world applications. To bridge this gap, we introduce M2-Reasoning-7B, a model designed to excel in both general and spatial reasoning. Our approach integrates two key innovations: (1) a novel data pipeline that generates 294.2K high-quality data samples (168K for cold-start fine-tuning and 126.2K for RLVR), which feature logically coherent reasoning trajectories and have undergone comprehensive assessment; and (2) a dynamic multi-task training strategy with step-wise optimization to mitigate conflicts between data, and task-specific rewards for delivering tailored incentive signals. This combination of curated data and advanced training allows M2-Reasoning-7B to set a new state-of-the-art (SOTA) across 8 benchmarks, showcasing superior performance in both general and spatial reasoning domains.


【4】Simple Mechanistic Explanations for Out-Of-Context Reasoning
标题:脱离上下文推理的简单机制性解释
链接:https://arxiv.org/abs/2507.08218

作者:ang, Joshua Engels, Oliver Clive-Griffin
备注:ICML 2025 Workshop R2-FM
摘要:Out-of-context reasoning (OOCR) is a phenomenon in which fine-tuned LLMs exhibit surprisingly deep out-of-distribution generalization. Rather than learning shallow heuristics, they implicitly internalize and act on the consequences of observations scattered throughout the fine-tuning data. In this work, we investigate this phenomenon mechanistically and find that many instances of OOCR in the literature have a simple explanation: the LoRA fine-tuning essentially adds a constant steering vector, steering the model towards a general concept. This improves performance on the fine-tuning task and in many other concept-related domains, causing the surprising generalization. Moreover, we can directly train steering vectors for these tasks from scratch, which also induces OOCR. We find that our results hold even for a task that seems like it must involve conditional behavior (model backdoors); it turns out that unconditionally adding a steering vector is sufficient. Overall, our work presents one explanation of what gets learned during fine-tuning for OOCR tasks, contributing to the key question of why LLMs can reason out of context, an advanced capability that is highly relevant to their safe and reliable deployment.
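The "constant steering vector" mechanism the paper identifies can be reproduced in a few lines with a PyTorch forward hook that shifts a layer's output by a fixed vector; the layer choice, vector, and scale below are placeholders:

```python
# Add a constant steering vector to a layer's hidden states via a hook.
import torch
import torch.nn as nn

def add_steering_hook(layer: nn.Module, vector: torch.Tensor, scale: float = 1.0):
    def hook(module, inputs, output):
        return output + scale * vector  # shift every hidden state by a constant
    return layer.register_forward_hook(hook)

# Toy demonstration on a single linear layer standing in for a model block.
layer = nn.Linear(16, 16)
handle = add_steering_hook(layer, torch.randn(16), scale=0.5)
out = layer(torch.randn(2, 16))
handle.remove()
print(out.shape)
```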


【5】CTRLS: Chain-of-Thought Reasoning via Latent State-Transition
标题:CTRLS:通过潜在状态转移进行思维链推理
链接:https://arxiv.org/abs/2507.08182

作者: Yuxin Xiong, Xintong Li, Zhengmian Hu, Tong Yu, Rui Wang, Xiang Chen, Jingbo Shang, Julian McAuley
备注:10 pages
摘要:Chain-of-thought (CoT) reasoning enables large language models (LLMs) to break down complex problems into interpretable intermediate steps, significantly enhancing model transparency and performance in reasoning tasks. However, conventional CoT methods rely on heuristic sampling without structured modeling of reasoning transitions, constraining their ability to systematically explore and discover diverse and effective reasoning trajectories. In this work, we introduce CTRLS, a framework that formulates CoT reasoning as a Markov decision process (MDP) with latent state transitions, enabling principled and state-aware exploration via distributional reinforcement learning. By modelling reasoning actions as explicit probability distributions in latent space, our approach explicitly models epistemic uncertainty, facilitating robust exploration of the reasoning space. As part of our framework, we introduce an on-policy reinforcement learning strategy incorporating epsilon-greedy exploration and entropy-based regularization to iteratively refine latent state transitions without requiring additional fine-tuning of the underlying LLM. Theoretical analyses provide evidence lower bounds (ELBO), theoretically grounding our transition-aware modeling of latent reasoning dynamics. Further experiments demonstrate improvements in reasoning accuracy, diversity, and exploration efficiency across benchmark reasoning tasks.


【6】Lightweight Cloud Masking Models for On-Board Inference in Hyperspectral Imaging
标题:用于高光谱成像机载推理的轻量级云遮蔽模型
链接:https://arxiv.org/abs/2507.08052

作者:, António Pereira, Fabio Gentile, Aser Cortines, Sam Mugel, Román Orús, Stelios P. Neophytides, Michalis Mavrovouniotis
摘要:Cloud and cloud shadow masking is a crucial preprocessing step in hyperspectral satellite imaging, enabling the extraction of high-quality, analysis-ready data. This study evaluates various machine learning approaches, including gradient boosting methods such as XGBoost and LightGBM as well as convolutional neural networks (CNNs). All boosting and CNN models achieved accuracies exceeding 93%. Among the investigated models, the CNN with feature reduction emerged as the most efficient, offering a balance of high accuracy, low storage requirements, and rapid inference times on both CPUs and GPUs. Variations of this version, with only up to 597 trainable parameters, demonstrated the best trade-off in terms of deployment feasibility, accuracy, and computational efficiency. These results demonstrate the potential of lightweight artificial intelligence (AI) models for real-time hyperspectral image processing, supporting the development of on-board satellite AI systems for space-based applications.


检测相关(4篇)

【1】ADAPT: A Pseudo-labeling Approach to Combat Concept Drift in Malware Detection
标题:ADAPT:一种对抗恶意软件检测中概念漂移的伪标签方法
链接:https://arxiv.org/abs/2507.08597

作者:ul Alam, Aritran Piplai, Nidhi Rastogi
摘要:Machine learning models are commonly used for malware classification; however, they suffer from performance degradation over time due to concept drift. Adapting these models to changing data distributions requires frequent updates, which rely on costly ground truth annotations. While active learning can reduce the annotation burden, leveraging unlabeled data through semi-supervised learning remains a relatively underexplored approach in the context of malware detection. In this research, we introduce ADAPT, a novel pseudo-labeling semi-supervised algorithm for addressing concept drift. Our model-agnostic method can be applied to various machine learning models, including neural networks and tree-based algorithms. We conduct extensive experiments on five diverse malware detection datasets spanning Android, Windows, and PDF domains. The results demonstrate that our method consistently outperforms baseline models and competitive benchmarks. This work paves the way for more effective adaptation of machine learning models to concept drift in malware detection.
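For intuition, here is a generic confidence-thresholded pseudo-labeling loop of the kind the abstract describes; the classifier, the 0.9 threshold, and the retraining schedule are assumptions rather than ADAPT's actual algorithm:

```python
# Generic pseudo-labeling: repeatedly adopt confident predictions on
# unlabeled data as labels, then retrain.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pseudo_label_rounds(X_lab, y_lab, X_unlab, rounds=3, threshold=0.9):
    X_cur, y_cur = X_lab.copy(), y_lab.copy()
    clf = RandomForestClassifier(n_estimators=100).fit(X_cur, y_cur)
    for _ in range(rounds):
        proba = clf.predict_proba(X_unlab)
        keep = proba.max(axis=1) >= threshold        # confident predictions only
        if not keep.any():
            break
        X_cur = np.vstack([X_cur, X_unlab[keep]])
        y_cur = np.concatenate([y_cur, clf.classes_[proba[keep].argmax(axis=1)]])
        X_unlab = X_unlab[~keep]
        clf = RandomForestClassifier(n_estimators=100).fit(X_cur, y_cur)
    return clf

rng = np.random.default_rng(3)
X_lab, y_lab = rng.normal(size=(100, 10)), rng.integers(0, 2, size=100)
model = pseudo_label_rounds(X_lab, y_lab, rng.normal(size=(400, 10)))
```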


【2】CircFormerMoE: An End-to-End Deep Learning Framework for Circular RNA Splice Site Detection and Pairing in Plant Genomes
标题:CircFormerMoE:一个用于植物基因组中环状RNA剪接位点检测和配对的端到端深度学习框架
链接:https://arxiv.org/abs/2507.08542

作者:iang
摘要:Circular RNAs (circRNAs) are important components of the non-coding RNA regulatory network. Previous circRNA identification primarily relies on high-throughput RNA sequencing (RNA-seq) data combined with alignment-based algorithms that detect back-splicing signals. However, these methods face several limitations: they cannot predict circRNAs directly from genomic DNA sequences and rely heavily on RNA experimental data; they involve high computational costs due to complex alignment and filtering steps; and they are inefficient for large-scale or genome-wide circRNA prediction. The challenge is even greater in plants, where plant circRNA splice sites often lack the canonical GT-AG motif seen in human mRNA splicing, and no efficient deep learning model with strong generalization capability currently exists. Furthermore, the number of currently identified plant circRNAs is likely far lower than their true abundance. In this paper, we propose a deep learning framework named CircFormerMoE based on transformers and mixture-of-experts for predicting circRNAs directly from plant genomic DNA. Our framework consists of two subtasks known as splicing site detection (SSD) and splicing site pairing (SSP). The model's effectiveness has been validated on gene data of 10 plant species. Trained on known circRNA instances, it is also capable of discovering previously unannotated circRNAs. In addition, we performed interpretability analyses on the trained model to investigate the sequence patterns contributing to its predictions. Our framework provides a fast and accurate computational method and tool for large-scale circRNA discovery in plants, laying a foundation for future research in plant functional genomics and non-coding RNA annotation.


【3】Rethinking Spatio-Temporal Anomaly Detection: A Vision for Causality-Driven Cybersecurity
标题:重新思考时空异常检测:因果驱动的网络安全愿景
链接:https://arxiv.org/abs/2507.08177

作者:esh Malarkkan, Haoyue Bai, Xinyuan Wang, Anjali Kaushik, Dongjie Wang, Yanjie Fu
备注:5 pages, 1 figure, Under Review in Vision Paper Track-ACM SIGSPATIAL 2025
摘要:As cyber-physical systems grow increasingly interconnected and spatially distributed, ensuring their resilience against evolving cyberattacks has become a critical priority. Spatio-Temporal Anomaly detection plays an important role in ensuring system security and operational integrity. However, current data-driven approaches, largely driven by black-box deep learning, face challenges in interpretability, adaptability to distribution shifts, and robustness under evolving system dynamics. In this paper, we advocate for a causal learning perspective to advance anomaly detection in spatially distributed infrastructures that grounds detection in structural cause-effect relationships. We identify and formalize three key directions: causal graph profiling, multi-view fusion, and continual causal graph learning, each offering distinct advantages in uncovering dynamic cause-effect structures across time and space. Drawing on real-world insights from systems such as water treatment infrastructures, we illustrate how causal models provide early warning signals and root cause attribution, addressing the limitations of black-box detectors. Looking ahead, we outline the future research agenda centered on multi-modality, generative AI-driven, and scalable adaptive causal frameworks. Our objective is to lay a new research trajectory toward scalable, adaptive, explainable, and spatially grounded anomaly detection systems. We hope to inspire a paradigm shift in cybersecurity research, promoting causality-driven approaches to address evolving threats in interconnected infrastructures.


【4】Emotion Detection in Older Adults Using Physiological Signals from Wearable Sensors
标题:使用可穿戴传感器的生理信号检测老年人的情绪
链接:https://arxiv.org/abs/2507.08167

作者:Hassan Onim, Andrew M. Kiselica, Himanshu Thapliyal
摘要:Emotion detection in older adults is crucial for understanding their cognitive and emotional well-being, especially in hospital and assisted living environments. In this work, we investigate an edge-based, non-obtrusive approach to emotion identification that uses only physiological signals obtained via wearable sensors. Our dataset includes data from 40 older individuals. Emotional states were obtained using physiological signals from the Empatica E4 and Shimmer3 GSR+ wristband and facial expressions were recorded using camera-based emotion recognition with the iMotion's Facial Expression Analysis (FEA) module. The dataset also contains twelve emotion categories in terms of relative intensities. We aim to study how well emotion recognition can be accomplished using simply physiological sensor data, without the requirement for cameras or intrusive facial analysis. By leveraging classical machine learning models, we predict the intensity of emotional responses based on physiological signals. We achieved a best r2 score of 0.782 and a lowest MSE of 0.0006 on the regression task. This method has significant implications for individuals with Alzheimer's Disease and Related Dementia (ADRD), as well as veterans coping with Post-Traumatic Stress Disorder (PTSD) or other cognitive impairments. Our results across multiple classical regression models validate the feasibility of this method, paving the way for privacy-preserving and efficient emotion recognition systems in real-world settings.


分类|识别(2篇)

【1】Emotion Recognition in Older Adults with Quantum Machine Learning and Wearable Sensors
标题:利用量子机器学习和可穿戴传感器进行老年人的情绪识别
链接:https://arxiv.org/abs/2507.08175

作者:Hassan Onim, Travis S. Humble, Himanshu Thapliyal
摘要:We investigate the feasibility of inferring emotional states exclusively from physiological signals, thereby presenting a privacy-preserving alternative to conventional facial recognition techniques. We conduct a performance comparison of classical machine learning algorithms and hybrid quantum machine learning (QML) methods with a quantum kernel-based model. Our results indicate that the quantum-enhanced SVM surpasses classical counterparts in classification performance across all emotion categories, even when trained on limited datasets. The F1 scores over all classes are over 80%, with up to a 36% improvement in the recall values. The integration of wearable sensor data with quantum machine learning not only enhances accuracy and robustness but also facilitates unobtrusive emotion recognition. This methodology holds promise for populations with impaired communication abilities, such as individuals with Alzheimer's Disease and Related Dementias (ADRD) and veterans with Post-Traumatic Stress Disorder (PTSD). The findings establish an early foundation for passive emotional monitoring in clinical and assisted living conditions.


【2】A Hybrid Multilayer Extreme Learning Machine for Image Classification with an Application to Quadcopters
标题:图像分类混合多层极限学习机及其在四轴飞行器中的应用
链接:https://arxiv.org/abs/2507.08047

作者:.Hernandez-Hernandez, Adrian Rubio-Solis
备注:22 pages, 10 figures, 3 tables
摘要:Multilayer Extreme Learning Machine (ML-ELM) and its variants have proven to be an effective technique for the classification of different natural signals such as audio, video, acoustic and images. In this paper, a Hybrid Multilayer Extreme Learning Machine (HML-ELM) that is based on ELM-based autoencoder (ELM-AE) and Interval Type-2 fuzzy logic theory is suggested for active image classification and applied to Unmanned Aerial Vehicles (UAVs). The proposed methodology is a hierarchical ELM learning framework that consists of two main phases: 1) self-taught feature extraction and 2) supervised feature classification. First, unsupervised multilayer feature encoding is achieved by stacking a number of ELM-AEs, in which input data is projected into a number of high-level representations. At the second phase, the final features are classified using a novel Simplified Interval Type-2 Fuzzy ELM (SIT2-FELM) with a fast output reduction layer based on the SC algorithm, an improved version of the algorithm Center of Sets Type Reducer without Sorting Requirement (COSTRWSR). To validate the efficiency of the HML-ELM, two types of experiments for the classification of images are suggested. First, the HML-ELM is applied to solve a number of benchmark problems for image classification. Secondly, a number of real experiments on the active classification and transport of four different objects between two predefined locations using a UAV are implemented. Experiments demonstrate that the proposed HML-ELM delivers superior efficiency compared to other similar methodologies such as ML-ELM, Multilayer Fuzzy Extreme Learning Machine (ML-FELM) and ELM.


表征(1篇)

【1】The Non-Linear Representation Dilemma: Is Causal Abstraction Enough for Mechanistic Interpretability?
标题:非线性表示困境:因果抽象足以实现机械解释吗?
链接:https://arxiv.org/abs/2507.08802

作者:ter, Julian Minder, Thomas Hofmann, Tiago Pimentel
备注:42 pages, 17 figures, code available in this http URL
摘要:The concept of causal abstraction got recently popularised to demystify the opaque decision-making processes of machine learning models; in short, a neural network can be abstracted as a higher-level algorithm if there exists a function which allows us to map between them. Notably, most interpretability papers implement these maps as linear functions, motivated by the linear representation hypothesis: the idea that features are encoded linearly in a model's representations. However, this linearity constraint is not required by the definition of causal abstraction. In this work, we critically examine the concept of causal abstraction by considering arbitrarily powerful alignment maps. In particular, we prove that under reasonable assumptions, any neural network can be mapped to any algorithm, rendering this unrestricted notion of causal abstraction trivial and uninformative. We complement these theoretical findings with empirical evidence, demonstrating that it is possible to perfectly map models to algorithms even when these models are incapable of solving the actual task; e.g., on an experiment using randomly initialised language models, our alignment maps reach 100% interchange-intervention accuracy on the indirect object identification task. This raises the non-linear representation dilemma: if we lift the linearity constraint imposed to alignment maps in causal abstraction analyses, we are left with no principled way to balance the inherent trade-off between these maps' complexity and accuracy. Together, these results suggest an answer to our title's question: causal abstraction is not enough for mechanistic interpretability, as it becomes vacuous without assumptions about how models encode information. Studying the connection between this information-encoding assumption and causal abstraction should lead to exciting future work.


编码器(1篇)

【1】AbbIE: Autoregressive Block-Based Iterative Encoder for Efficient Sequence Modeling
标题:AbbIE:用于高效序列建模的自回归基于块的迭代编码器
链接:https://arxiv.org/abs/2507.08567

作者:leksandrov, Meghdad Kurmanji, Fernando Garcia Redondo, David O'Shea, William Shen, Alex Iacob, Lorenzo Sani, Xinchi Qiu, Nicola Cancedda, Nicholas D. Lane
备注:14 pages and 6 figures. Submitted to NeurIPS 2025
摘要:We introduce the Autoregressive Block-Based Iterative Encoder (AbbIE), a novel recursive generalization of the encoder-only Transformer architecture, which achieves better perplexity than a standard Transformer and allows for the dynamic scaling of compute resources at test time. This simple, recursive approach is a complement to scaling large language model (LLM) performance through parameter and token counts. AbbIE performs its iterations in latent space, but unlike latent reasoning models, does not require a specialized dataset or training protocol. We show that AbbIE upward generalizes (ability to generalize to arbitrary iteration lengths) at test time by only using 2 iterations during train time, far outperforming alternative iterative methods. AbbIE's ability to scale its computational expenditure based on the complexity of the task gives it an up to 12% improvement in zero-shot in-context learning tasks versus other iterative and standard methods and up to 5% improvement in language perplexity. The results from this study open a new avenue to Transformer performance scaling. We perform all of our evaluations on model sizes up to 350M parameters.
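The recursive idea, re-applying the same encoder block at test time to scale compute, can be sketched as follows; the use of a stock nn.TransformerEncoderLayer and the iteration count are illustrative only, since AbbIE iterates in latent space with its own training protocol:

```python
# Re-apply one shared encoder block k times: more iterations buy more
# test-time compute without adding parameters.
import torch
import torch.nn as nn

block = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)

def iterative_encode(x, k=4):
    for _ in range(k):
        x = block(x)
    return x

print(iterative_encode(torch.randn(2, 10, 64), k=4).shape)
```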


优化|敛散性(6篇)

【1】Greedy Low-Rank Gradient Compression for Distributed Learning with Convergence Guarantees
标题:具有收敛保证的分布式学习贪婪低秩梯度压缩
链接:https://arxiv.org/abs/2507.08784

作者:en, Yutong He, Pengrui Li, Weichen Jia, Kun Yuan
备注:18 pages, 5 figures
摘要:Distributed optimization is pivotal for large-scale signal processing and machine learning, yet communication overhead remains a major bottleneck. Low-rank gradient compression, in which the transmitted gradients are approximated by low-rank matrices to reduce communication, offers a promising remedy. Existing methods typically adopt either randomized or greedy compression strategies: randomized approaches project gradients onto randomly chosen subspaces, introducing high variance and degrading empirical performance; greedy methods select the most informative subspaces, achieving strong empirical results but lacking convergence guarantees. To address this gap, we propose GreedyLore--the first Greedy Low-Rank gradient compression algorithm for distributed learning with rigorous convergence guarantees. GreedyLore incorporates error feedback to correct the bias introduced by greedy compression and introduces a semi-lazy subspace update that ensures the compression operator remains contractive throughout all iterations. With these techniques, we prove that GreedyLore achieves a convergence rate of $\mathcal{O}(\sigma/\sqrt{NT} + 1/T)$ under standard optimizers such as MSGD and Adam--marking the first linear speedup convergence rate for low-rank gradient compression. Extensive experiments are conducted to validate our theoretical findings.
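The core mechanics, greedy low-rank compression plus error feedback, fit in a few lines: compress the gradient-plus-residual to its top-r SVD factors (the greedy choice of subspace), transmit those, and carry the approximation error into the next step. The rank and shapes below are arbitrary, and GreedyLore's semi-lazy subspace update is omitted:

```python
# Greedy top-r SVD compression of a gradient matrix with error feedback.
import numpy as np

def compress_with_feedback(grad, residual, rank=4):
    M = grad + residual                               # error-corrected gradient
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]     # what gets transmitted
    return approx, M - approx                         # new residual

g = np.random.default_rng(4).normal(size=(64, 32))
res = np.zeros_like(g)
sent, res = compress_with_feedback(g, res)
print(np.linalg.norm(g - sent) / np.linalg.norm(g))   # relative compression error
```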


【2】Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions
标题:分位数奖励策略优化:基于逐点回归和精确配分函数的对齐
链接:https://arxiv.org/abs/2507.08068

作者:renok, Skander Moalla, Caglar Gulcehre
摘要:Aligning large language models with pointwise absolute rewards has so far required online, on-policy algorithms such as PPO and GRPO. In contrast, simpler methods that can leverage offline or off-policy data, such as DPO and REBEL, are limited to learning from preference pairs or relative signals. To bridge this gap, we introduce Quantile Reward Policy Optimization (QRPO), which learns from pointwise absolute rewards while preserving the simplicity and offline applicability of DPO-like methods. QRPO uses quantile rewards to enable regression to the closed-form solution of the KL-regularized RL objective. This reward yields an analytically tractable partition function, removing the need for relative signals to cancel this term. Moreover, QRPO scales with increased compute to estimate quantile rewards, opening a new dimension for pre-computation scaling. Empirically, QRPO consistently achieves top performance on chat and coding evaluations -- reward model scores, AlpacaEval 2, and LeetCode -- compared to DPO, REBEL, and SimPO across diverse datasets and 8B-scale models. Finally, we find that training with robust rewards instead of converting them to preferences induces less length bias.
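The quantile-reward transform at the heart of QRPO can be illustrated with an empirical-CDF mapping over a batch of rewards; how the reference rewards are gathered per prompt is an assumption here:

```python
# Map raw rewards to empirical quantiles in (0, 1] via rank order.
import numpy as np

def quantile_rewards(rewards):
    ranks = np.argsort(np.argsort(rewards))   # 0-based rank of each reward
    return (ranks + 1) / len(rewards)

print(quantile_rewards(np.array([2.5, -1.0, 0.3, 7.2])))  # -> [0.75 0.25 0.5 1.]
```

Because quantiles are uniform by construction, the normalizing term of the KL-regularized objective becomes tractable, which is what lets QRPO regress directly against the closed-form solution.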


【3】Tree-Structured Parzen Estimator Can Solve Black-Box Combinatorial Optimization More Efficiently
标题:树型Parzen估计能更有效地求解黑盒组合优化问题
链接:https://arxiv.org/abs/2507.08053

作者:be, Yunzhuo Wang, Shuhei Watanabe
备注:Submitted to AutoML Conference
摘要:Tree-structured Parzen estimator (TPE) is a versatile hyperparameter optimization (HPO) method supported by popular HPO tools. Since these HPO tools have been developed in line with the trend of deep learning (DL), the problem setups often used in the DL domain have been discussed for TPE such as multi-objective optimization and multi-fidelity optimization. However, the practical applications of HPO are not limited to DL, and black-box combinatorial optimization is actively utilized in some domains, e.g., chemistry and biology. As combinatorial optimization has been an untouched, yet very important, topic in TPE, we propose an efficient combinatorial optimization algorithm for TPE. In this paper, we first generalize the categorical kernel with the numerical kernel in TPE, enabling us to introduce a distance structure to the categorical kernel. Then we discuss modifications for the newly developed kernel to handle a large combinatorial search space. These modifications reduce the time complexity of the kernel calculation with respect to the size of a combinatorial search space. In the experiments using synthetic problems, we verified that our proposed method identifies better solutions with fewer evaluations than the original TPE. Our algorithm is available in Optuna, an open-source framework for HPO.
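Since the algorithm ships in Optuna, a standard TPE setup over a categorical search space looks like the following; the toy objective is invented, and whether a given run exercises the new distance-aware categorical kernel depends on the Optuna version:

```python
# Optuna TPE over a purely categorical (combinatorial) search space.
import optuna

def objective(trial):
    aa = "ACDEFGHIKLMNPQRSTVWY"  # e.g., a toy sequence-design alphabet
    seq = [trial.suggest_categorical(f"pos{i}", list(aa)) for i in range(5)]
    return sum(ord(c) for c in seq)  # placeholder score to minimize

study = optuna.create_study(sampler=optuna.samplers.TPESampler(seed=0))
study.optimize(objective, n_trials=50)
print(study.best_params)
```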


【4】Quantum Algorithms for Projection-Free Sparse Convex Optimization
标题:无投影稀疏凸优化的量子算法
链接:https://arxiv.org/abs/2507.08543

作者:e, John C.S. Lui
摘要:This paper considers the projection-free sparse convex optimization problem for the vector domain and the matrix domain, which covers a large number of important applications in machine learning and data science. For the vector domain $\mathcal{D} \subset \mathbb{R}^d$, we propose two quantum algorithms for sparse constraints that finds a $\varepsilon$-optimal solution with the query complexity of $O(\sqrt{d}/\varepsilon)$ and $O(1/\varepsilon)$ by using the function value oracle, reducing a factor of $O(\sqrt{d})$ and $O(d)$ over the best classical algorithm, respectively, where $d$ is the dimension. For the matrix domain $\mathcal{D} \subset \mathbb{R}^{d\times d}$, we propose two quantum algorithms for nuclear norm constraints that improve the time complexity to $\tilde{O}(rd/\varepsilon^2)$ and $\tilde{O}(\sqrt{r}d/\varepsilon^3)$ for computing the update step, reducing at least a factor of $O(\sqrt{d})$ over the best classical algorithm, where $r$ is the rank of the gradient matrix. Our algorithms show quantum advantages in projection-free sparse convex optimization problems as they outperform the optimal classical methods in dependence on the dimension $d$.


【5】Optimal and Practical Batched Linear Bandit Algorithm
标题:最优实用的批处理线性Bandit算法
链接:https://arxiv.org/abs/2507.08438

作者:Yu, Min-hwan Oh
备注:Accepted at ICML 2025
摘要:We study the linear bandit problem under limited adaptivity, known as the batched linear bandit. While existing approaches can achieve near-optimal regret in theory, they are often computationally prohibitive or underperform in practice. We propose BLAE, a novel batched algorithm that integrates arm elimination with regularized G-optimal design, achieving the minimax optimal regret (up to logarithmic factors in $T$) in both large-$K$ and small-$K$ regimes for the first time, while using only $O(\log\log T)$ batches. Our analysis introduces new techniques for batch-wise optimal design and refined concentration bounds. Crucially, BLAE demonstrates low computational overhead and strong empirical performance, outperforming state-of-the-art methods in extensive numerical evaluations. Thus, BLAE is the first algorithm to combine provable minimax-optimality in all regimes and practical superiority in batched linear bandits.


【6】Dual-Attention U-Net++ with Class-Specific Ensembles and Bayesian Hyperparameter Optimization for Precise Wound and Scale Marker Segmentation
标题:采用特定类别集成与贝叶斯超参数优化的双注意力U-Net++,用于精确的伤口和比例尺标记分割
链接:https://arxiv.org/abs/2507.05314

作者:eślak, Miriam Reca, Olena Onyshchenko, Jacek Rumiński
备注:11 pages, conference: Joint 20th Nordic-Baltic Conference on Biomedical Engineering & 24th Polish Conference on Biocybernetics and Biomedical Engineering; 6 figures, 2 tables, 11 sources
摘要:Accurate segmentation of wounds and scale markers in clinical images remains a significant challenge, crucial for effective wound management and automated assessment. In this study, we propose a novel dual-attention U-Net++ architecture, integrating channel-wise (SCSE) and spatial attention mechanisms to address severe class imbalance and variability in medical images effectively. Initially, extensive benchmarking across diverse architectures and encoders via 5-fold cross-validation identified EfficientNet-B7 as the optimal encoder backbone. Subsequently, we independently trained two class-specific models with tailored preprocessing, extensive data augmentation, and Bayesian hyperparameter tuning (WandB sweeps). The final model ensemble utilized Test Time Augmentation to further enhance prediction reliability. Our approach was evaluated on a benchmark dataset from the NBC 2025 & PCBBE 2025 competition. Segmentation performance was quantified using a weighted F1-score (75% wounds, 25% scale markers), calculated externally by competition organizers on undisclosed hardware. The proposed approach achieved an F1-score of 0.8640, underscoring its effectiveness for complex medical segmentation tasks.


预测|估计(3篇)

【1】SynBridge: Bridging Reaction States via Discrete Flow for Bidirectional Reaction Prediction
标题:SynBridge:通过离散流桥连反应状态以进行双向反应预测
链接:https://arxiv.org/abs/2507.08475

作者:n, Junjie Wang, Zhifeng Gao, Xiaohong Ji, Rong Zhu, Linfeng Zhang, Guolin Ke, Weinan E
备注:22 pages, 2 figures
摘要:The essence of a chemical reaction lies in the redistribution and reorganization of electrons, which is often manifested through electron transfer or the migration of electron pairs. These changes are inherently discrete and abrupt in the physical world, such as alterations in the charge states of atoms or the formation and breaking of chemical bonds. To model the transition of states, we propose SynBridge, a bidirectional flow-based generative model to achieve multi-task reaction prediction. By leveraging a graph-to-graph transformer network architecture and discrete flow bridges between any two discrete distributions, SynBridge captures bidirectional chemical transformations between graphs of reactants and products through the bonds' and atoms' discrete states. We further demonstrate the effectiveness of our method through extensive experiments on three benchmark datasets (USPTO-50K, USPTO-MIT, Pistachio), achieving state-of-the-art performance in both forward and retrosynthesis tasks. Our ablation studies and noise scheduling analysis reveal the benefits of structured diffusion over discrete spaces for reaction prediction.


【2】Mallows Model with Learned Distance Metrics: Sampling and Maximum Likelihood Estimation
标题:带学习距离度量的Mallows模型:采样与最大似然估计
链接:https://arxiv.org/abs/2507.08108

作者:limohammadi, Kiana Asgari
摘要:Mallows model is a widely-used probabilistic framework for learning from ranking data, with applications ranging from recommendation systems and voting to aligning language models with human preferences~\cite{chen2024mallows, kleinberg2021algorithmic, rafailov2024direct}. Under this model, observed rankings are noisy perturbations of a central ranking $\sigma$, with likelihood decaying exponentially in distance from $\sigma$, i.e., $P(\pi) \propto \exp\big(-\beta \cdot d(\pi, \sigma)\big),$ where $\beta > 0$ controls dispersion and $d$ is a distance function.   Existing methods mainly focus on fixed distances (such as Kendall's $\tau$ distance), with no principled approach to learning the distance metric directly from data. In practice, however, rankings naturally vary by context; for instance, in some sports we regularly see long-range swaps (a low-rank team beating a high-rank one), while in others such events are rare. Motivated by this, we propose a generalization of Mallows model that learns the distance metric directly from data. Specifically, we focus on $L_\alpha$ distances: $d_\alpha(\pi,\sigma):=\sum_{i=1}^{n} |\pi(i)-\sigma(i)|^\alpha$.   For any $\alpha\geq 1$ and $\beta>0$, we develop a Fully Polynomial-Time Approximation Scheme (FPTAS) to efficiently generate samples that are $\epsilon$-close (in total variation distance) to the true distribution. Even in the special cases of $L_1$ and $L_2$, this generalizes prior results that required vanishing dispersion ($\beta\to0$). Using this sampling algorithm, we propose an efficient Maximum Likelihood Estimation (MLE) algorithm that jointly estimates the central ranking, the dispersion parameter, and the optimal distance metric. We prove strong consistency results for our estimators (for any values of $\alpha$ and $\beta$), and we validate our approach empirically using datasets from sports rankings.
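For intuition about the generalized model $P(\pi) \propto \exp(-\beta \cdot d_\alpha(\pi, \sigma))$, here is a generic Metropolis sampler over permutations using adjacent-transposition proposals; this is a simple MCMC baseline for illustration, not the paper's FPTAS:

```python
# Metropolis sampler for the Mallows model with an L_alpha displacement
# distance; pi[j] is the item placed at rank j.
import numpy as np

def d_alpha(pi, central_pos, alpha):
    # central_pos[i] = position of item i in the central ranking sigma
    return np.sum(np.abs(np.arange(len(pi)) - central_pos[pi]) ** alpha)

def mallows_mcmc(sigma, beta=0.5, alpha=2.0, steps=10000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(sigma)
    pos = np.empty(n, dtype=int)
    pos[sigma] = np.arange(n)
    pi = sigma.copy()
    cur = d_alpha(pi, pos, alpha)
    for _ in range(steps):
        i = rng.integers(n - 1)
        pi[i], pi[i + 1] = pi[i + 1], pi[i]        # propose adjacent swap
        new = d_alpha(pi, pos, alpha)
        if rng.random() >= np.exp(-beta * (new - cur)):
            pi[i], pi[i + 1] = pi[i + 1], pi[i]    # reject: swap back
        else:
            cur = new
    return pi

print(mallows_mcmc(np.arange(6)))
```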


【3】Predicting Flow Dynamics using Diffusion Models
标题:使用扩散模型预测流量动态
链接:https://arxiv.org/abs/2507.08106

作者:achnang, Vismay Churiwala
摘要:In this work, we aimed to replicate and extend the results presented in the DiffFluid paper [1]. The DiffFluid model showed that diffusion models combined with Transformers are capable of predicting fluid dynamics. It uses a denoising diffusion probabilistic model (DDPM) framework to tackle Navier-Stokes and Darcy flow equations. Our goal was to validate the reproducibility of the methods in the DiffFluid paper while testing its viability for other simulation types, particularly the Lattice Boltzmann method. Despite our computational limitations and time constraints, this work provides evidence of the flexibility and potential of the model as a general-purpose solver for fluid dynamics. Our results show both the potential and challenges of applying diffusion models to complex fluid dynamics problems. This work highlights the opportunities for future research in optimizing the computational efficiency and scaling such models in broader domains.


其他神经网络|深度学习|模型|建模(17篇)

【1】NeuralOS: Towards Simulating Operating Systems via Neural Generative Models
标题:NeuralOS:通过神经生成模型模拟操作系统
链接:https://arxiv.org/abs/2507.08800

作者:rd, Sun Sun, Hongyu Guo, Wenhu Chen, Yuntian Deng
摘要:We introduce NeuralOS, a neural framework that simulates graphical user interfaces (GUIs) of operating systems by directly predicting screen frames in response to user inputs such as mouse movements, clicks, and keyboard events. NeuralOS combines a recurrent neural network (RNN), which tracks computer state, with a diffusion-based neural renderer that generates screen images. The model is trained on a large-scale dataset of Ubuntu XFCE recordings, which include both randomly generated interactions and realistic interactions produced by AI agents. Experiments show that NeuralOS successfully renders realistic GUI sequences, accurately captures mouse interactions, and reliably predicts state transitions like application launches. Although modeling fine-grained keyboard interactions precisely remains challenging, NeuralOS offers a step toward creating fully adaptive, generative neural interfaces for future human-computer interaction systems.


【2】Modeling Partially Observed Nonlinear Dynamical Systems and Efficient Data Assimilation via Discrete-Time Conditional Gaussian Koopman Network
标题:通过离散时间条件高斯Koopman网络对部分观察的非线性动态系统建模和高效数据同化
链接:https://arxiv.org/abs/2507.08749

作者:hen, Zhongrui Wang, Nan Chen, Jin-Long Wu
摘要:A discrete-time conditional Gaussian Koopman network (CGKN) is developed in this work to learn surrogate models that can perform efficient state forecast and data assimilation (DA) for high-dimensional complex dynamical systems, e.g., systems governed by nonlinear partial differential equations (PDEs). Focusing on nonlinear partially observed systems that are common in many engineering and earth science applications, this work exploits Koopman embedding to discover a proper latent representation of the unobserved system states, such that the dynamics of the latent states are conditional linear, i.e., linear with the given observed system states. The modeled system of the observed and latent states then becomes a conditional Gaussian system, for which the posterior distribution of the latent states is Gaussian and can be efficiently evaluated via analytical formulae. The analytical formulae of DA facilitate the incorporation of DA performance into the learning process of the modeled system, which leads to a framework that unifies scientific machine learning (SciML) and data assimilation. The performance of discrete-time CGKN is demonstrated on several canonical problems governed by nonlinear PDEs with intermittency and turbulent features, including the viscous Burgers' equation, the Kuramoto-Sivashinsky equation, and the 2-D Navier-Stokes equations, with which we show that the discrete-time CGKN framework achieves comparable performance as the state-of-the-art SciML methods in state forecast and provides efficient and accurate DA results. The discrete-time CGKN framework also serves as an example to illustrate unifying the development of SciML models and their other outer-loop applications such as design optimization, inverse problems, and optimal control.
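The "analytical formulae" for the latent posterior are the familiar conditional-Gaussian (Kalman-style) update; a generic single-step sketch with toy shapes conveys the kind of closed-form assimilation CGKN exploits:

```python
# One conditional-Gaussian update: posterior mean/covariance of a latent
# state given a linear-Gaussian observation y = H x + noise, noise ~ N(0, R).
import numpy as np

def conditional_gaussian_update(mu, P, H, R, y):
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # gain
    mu_post = mu + K @ (y - H @ mu)
    P_post = (np.eye(len(mu)) - K @ H) @ P
    return mu_post, P_post

mu, P = np.zeros(2), np.eye(2)
H, R = np.array([[1.0, 0.0]]), np.array([[0.1]])
print(conditional_gaussian_update(mu, P, H, R, y=np.array([0.8])))
```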


【3】Partitioned Hybrid Quantum Fourier Neural Operators for Scientific Quantum Machine Learning
标题:用于科学量子机器学习的分区混合量子傅里叶神经算子
链接:https://arxiv.org/abs/2507.08746

作者:candelli, Yuanchun He, Stefano Mariani, Martina Siena, Stefano Markidis
摘要:We introduce the Partitioned Hybrid Quantum Fourier Neural Operator (PHQFNO), a generalization of the Quantum Fourier Neural Operator (QFNO) for scientific machine learning. PHQFNO partitions the Fourier operator computation across classical and quantum resources, enabling tunable quantum-classical hybridization and distributed execution across quantum and classical devices. The method extends QFNOs to higher dimensions and incorporates a message-passing framework to distribute data across different partitions. Input data are encoded into quantum states using unary encoding, and quantum circuit parameters are optimized using a variational scheme. We implement PHQFNO using PennyLane with PyTorch integration and evaluate it on Burgers' equation, incompressible and compressible Navier-Stokes equations. We show that PHQFNO recovers classical FNO accuracy. On incompressible Navier-Stokes, PHQFNO achieves higher accuracy than its classical counterparts. Finally, we perform a sensitivity analysis under input noise, confirming improved stability of PHQFNO over classical baselines.
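
For orientation, the classical building block that PHQFNO partitions is the FNO spectral layer: Fourier-transform the input, multiply the retained modes by learned weights, and transform back. A minimal 1-D PyTorch version follows; in PHQFNO part of this spectral computation would be delegated to quantum circuits, which is not shown here.

import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, ch, modes):
        super().__init__()
        self.modes = modes
        self.w = nn.Parameter(
            torch.randn(ch, ch, modes, dtype=torch.cfloat) / ch)

    def forward(self, x):                  # x: (B, ch, N)
        xf = torch.fft.rfft(x)             # (B, ch, N//2 + 1)
        out = torch.zeros_like(xf)
        out[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", xf[:, :, :self.modes], self.w)
        return torch.fft.irfft(out, n=x.shape[-1])

y = SpectralConv1d(ch=8, modes=12)(torch.randn(2, 8, 64))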


【4】Domain-Informed Operation Excellence of Gas Turbine System with Machine Learning
标题:利用机器学习实现燃气轮机系统基于领域知识的卓越运行
链接:https://arxiv.org/abs/2507.08697

作者:ammad Ashraf, Amir H. Keshavarzzadeh, Abdulelah S. Alshehri, Abdulrahman bin Jumah, Ramit Debnath, Vivek Dua
摘要:The domain-consistent adoption of artificial intelligence (AI) remains low in thermal power plants due to the black-box nature of AI algorithms and low representation of domain knowledge in conventional data-centric analytics. In this paper, we develop a MAhalanobis Distance-based OPTimization (MAD-OPT) framework that incorporates the Mahalanobis distance-based constraint to introduce domain knowledge into data-centric analytics. The developed MAD-OPT framework is applied to maximize thermal efficiency and minimize turbine heat rate for a 395 MW capacity gas turbine system. We demonstrate that the MAD-OPT framework can estimate domain-informed optimal process conditions under different ambient conditions, and the optimal solutions are found to be robust as evaluated by Monte Carlo simulations. We also apply the MAD-OPT framework to estimate optimal process conditions beyond the design power generation limit of the gas turbine system, and have found comparable results with the actual data of the power plant. We demonstrate that implementing data-centric optimization analytics without incorporating domain-informed constraints may provide ineffective solutions that may not be implementable in the real operation of the gas turbine system. This research advances the integration of the data-driven domain knowledge into machine learning-powered analytics that enhances the domain-informed operation excellence and paves the way for safe AI adoption in thermal power systems.
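
The core mechanism is easy to state: optimize a surrogate objective while constraining candidate operating points to stay within a Mahalanobis-distance ball around historical operating data. A hedged sketch with mock data follows; the surrogate model, the threshold of 3, and the variable scales are all assumptions.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) * [5.0, 2.0, 1.0] + [300.0, 50.0, 10.0]
mu, cov_inv = X.mean(axis=0), np.linalg.inv(np.cov(X, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def neg_efficiency(x):                    # stand-in surrogate model
    return -(0.3 * x[0] - 0.1 * (x[1] - 52.0) ** 2 + 0.5 * x[2])

res = minimize(
    neg_efficiency, x0=mu,
    constraints=[{"type": "ineq", "fun": lambda x: 3.0 - mahalanobis(x)}],
)   # solution stays within ~3 Mahalanobis units of historical operation
print(res.x, mahalanobis(res.x))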


【5】RTNinja: a generalized machine learning framework for analyzing random telegraph noise signals in nanoelectronic devices
标题:RTNinja:用于分析纳米电子器件中随机电报噪声信号的通用机器学习框架
链接:https://arxiv.org/abs/2507.08424

作者:aranasi, Robin Degraeve, Philippe Roussel, Clement Merckling
摘要:Random telegraph noise is a prevalent variability phenomenon in nanoelectronic devices, arising from stochastic carrier exchange at defect sites and critically impacting device reliability and performance. Conventional analysis techniques often rely on restrictive assumptions or manual interventions, limiting their applicability to complex, noisy datasets. Here, we introduce RTNinja, a generalized, fully automated machine learning framework for the unsupervised analysis of random telegraph noise signals. RTNinja deconvolves complex signals to identify the number and characteristics of hidden individual sources, without requiring prior knowledge of the system. The framework comprises two modular components: LevelsExtractor, which uses Bayesian inference and model selection to denoise and discretize the signal; and SourcesMapper, which infers source configurations through probabilistic clustering and optimization. To evaluate performance, we developed a Monte Carlo simulator that generates labeled datasets spanning broad signal-to-noise ratios and source complexities; across 7000 such datasets, RTNinja consistently demonstrated high-fidelity signal reconstruction and accurate extraction of source amplitudes and activity patterns. Our results demonstrate that RTNinja offers a robust, scalable, and device-agnostic tool for random telegraph noise characterization, enabling large-scale statistical benchmarking, reliability-centric technology qualification, predictive failure modeling, and device physics exploration in next-generation nanoelectronics.
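
As a rough analogue of the LevelsExtractor stage, the snippet below discretizes a noisy two-level signal by fitting Gaussian mixtures and picking the level count by BIC; this is a simple stand-in for the paper's Bayesian inference and model selection, with made-up signal parameters.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
states = rng.random(5000) < 0.3                      # hidden two-level source
signal = states * 1.0 + rng.normal(0.0, 0.15, 5000)  # amplitude 1.0 plus noise

s = signal.reshape(-1, 1)
fits = [GaussianMixture(k, random_state=0).fit(s) for k in (1, 2, 3, 4)]
best = min(fits, key=lambda g: g.bic(s))             # model selection
print("levels found:", best.n_components)            # expect 2
levels = best.predict(s)                             # discretized trace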


【6】Advances in Machine Learning: Where Can Quantum Techniques Help?
标题:机器学习的进步:量子技术能提供什么帮助?
链接:https://arxiv.org/abs/2507.08379

作者:ashyap, Rohit K Ramakrishnan, Kumari Jyoti, Apoorva D Patel
备注:28 pages, 1 figure
摘要:Quantum Machine Learning (QML) represents a promising frontier at the intersection of quantum computing and artificial intelligence, aiming to leverage quantum computational advantages to enhance data-driven tasks. This review explores the potential of QML to address the computational bottlenecks of classical machine learning, particularly in processing complex datasets. We introduce the theoretical foundations of QML, including quantum data encoding, quantum learning theory and optimization techniques, while categorizing QML approaches based on data type and computational architecture. It is well-established that quantum computational advantages are problem-dependent, and so potentially useful directions for QML need to be systematically identified. Key developments, such as Quantum Principal Component Analysis, quantum-enhanced sensing and applications in material science, are critically evaluated for their theoretical speed-ups and practical limitations. The challenges posed by Noisy Intermediate-Scale Quantum (NISQ) devices, including hardware noise, scalability constraints and data encoding overheads, are discussed in detail. We also outline future directions, emphasizing the need for quantum-native algorithms, improved error correction, and realistic benchmarks to bridge the gap between theoretical promise and practical deployment. This comprehensive analysis underscores that while QML has significant potential for specific applications such as quantum chemistry and sensing, its broader utility in real-world scenarios remains contingent on overcoming technological and methodological hurdles.


【7】scE$^2$TM: Toward Interpretable Single-Cell Embedding via Topic Modeling
标题:scE$^2$TM:通过主题建模实现可解释单细胞嵌入
链接:https://arxiv.org/abs/2507.08355

作者:en, Yuyin Lu, Zhiming Dai, Fu Lee Wang, Qing Li, Yanghui Rao
摘要:Recent advances in sequencing technologies have enabled researchers to explore cellular heterogeneity at single-cell resolution. Meanwhile, interpretability has gained prominence parallel to the rapid increase in the complexity and performance of deep learning models. In recent years, topic models have been widely used for interpretable single-cell embedding learning and clustering analysis, which we refer to as single-cell embedded topic models. However, previous studies evaluated the interpretability of the models mainly through qualitative analysis, and these single-cell embedded topic models suffer from the potential problem of interpretation collapse. Furthermore, their neglect of external biological knowledge constrains analytical performance. Here, we present scE2TM, an external knowledge-guided single-cell embedded topic model that provides a high-quality cell embedding and strong interpretation, contributing to comprehensive scRNA-seq data analysis. Our comprehensive evaluation across 20 scRNA-seq datasets demonstrates that scE2TM achieves significant clustering performance gains compared to 7 state-of-the-art methods. In addition, we propose a new interpretability evaluation benchmark that introduces 10 metrics to quantitatively assess the interpretability of single-cell embedded topic models. The results show that the interpretation provided by scE2TM performs encouragingly in terms of diversity and consistency with the underlying biological signals, contributing to a better revealing of the underlying biological mechanisms.


【8】Audio Inpainting using Discrete Diffusion Model
标题:使用离散扩散模型的音频修复
链接:https://arxiv.org/abs/2507.08333

作者:, Iftach Shoham, Moshe Buchris, Oren Gal, Haim Permuter, Gilad Katz, Eliya Nachmani
摘要:Audio inpainting refers to the task of reconstructing missing segments in corrupted audio recordings. While prior approaches-including waveform and spectrogram-based diffusion models-have shown promising results for short gaps, they often degrade in quality when gaps exceed 100 milliseconds (ms). In this work, we introduce a novel inpainting method based on discrete diffusion modeling, which operates over tokenized audio representations produced by a pre-trained audio tokenizer. Our approach models the generative process directly in the discrete latent space, enabling stable and semantically coherent reconstruction of missing audio. We evaluate the method on the MusicNet dataset using both objective and perceptual metrics across gap durations up to 300 ms. We further evaluated our approach on the MTG dataset, extending the gap duration to 500 ms. Experimental results demonstrate that our method achieves competitive or superior performance compared to existing baselines, particularly for longer gaps, offering a robust solution for restoring degraded musical recordings. Audio examples of our proposed method can be found at https://iftach21.github.io/


【9】Physics-Informed Neural Networks with Hard Nonlinear Equality and Inequality Constraints
标题:具有硬非线性等式和不等式约束的物理信息神经网络
链接:https://arxiv.org/abs/2507.08124

作者:takher, Rahul Golder, M. M. Faruque Hasan
备注:20 pages, 8 figures
摘要:Traditional physics-informed neural networks (PINNs) do not guarantee strict constraint satisfaction. This is problematic in engineering systems where minor violations of governing laws can significantly degrade the reliability and consistency of model predictions. In this work, we develop KKT-Hardnet, a PINN architecture that enforces both linear and nonlinear equality and inequality constraints up to machine precision. It leverages a projection onto the feasible region through solving Karush-Kuhn-Tucker (KKT) conditions of a distance minimization problem. Furthermore, we reformulate the nonlinear KKT conditions using log-exponential transformation to construct a general sparse system with only linear and exponential terms, thereby making the projection differentiable. We apply KKT-Hardnet on both test problems and a real-world chemical process simulation. Compared to multilayer perceptrons and PINNs, KKT-Hardnet achieves higher accuracy and strict constraint satisfaction. This approach allows the integration of domain knowledge into machine learning towards reliable hybrid modeling of complex systems.
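
For the special case of linear equality constraints, the KKT projection has a closed form, which makes the hard-constraint idea easy to see; the paper's nonlinear case instead solves log-exp-transformed KKT conditions numerically. A sketch with an assumed toy constraint:

import torch

def project_eq(y, A, b):
    # KKT of min 0.5||x - y||^2 s.t. Ax = b  =>  x = y - A^T (A A^T)^{-1} (Ay - b)
    lam = torch.linalg.solve(A @ A.T, A @ y - b)
    return y - A.T @ lam            # differentiable, so usable as a last layer

A = torch.tensor([[1.0, 1.0, 1.0]])    # e.g. a mass balance: sum(x) = 1
b = torch.tensor([1.0])
y = torch.tensor([0.2, 0.5, 0.6])      # raw (infeasible) network output
x = project_eq(y, A, b)
print(x, A @ x)                        # A @ x equals b to machine precision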


【10】Quasi-Random Physics-informed Neural Networks
标题:准随机物理信息神经网络
链接:https://arxiv.org/abs/2507.08121

作者:u, Ivan Oseledets
摘要:Physics-informed neural networks have shown promise in solving partial differential equations (PDEs) by integrating physical constraints into neural network training, but their performance is sensitive to the sampling of points. Based on the impressive performance of quasi Monte-Carlo methods in high dimensional problems, this paper proposes Quasi-Random Physics-Informed Neural Networks (QRPINNs), which use low-discrepancy sequences for sampling instead of random points directly from the domain. Theoretically, QRPINNs have been proven to have a better convergence rate than PINNs. Empirically, experiments demonstrate that QRPINNs significantly outperform PINNs and some representative adaptive sampling methods, especially in high-dimensional PDEs. Furthermore, combining QRPINNs with adaptive sampling can further improve the performance.
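
The change relative to a standard PINN is confined to how collocation points are drawn. A sketch using SciPy's Sobol generator (domain bounds and counts are arbitrary choices):

import torch
from scipy.stats import qmc

n, dim = 4096, 2                        # e.g. (x, t) for a 1-D evolution PDE
sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
pts = qmc.scale(sobol.random(n), [-1.0, 0.0], [1.0, 1.0])  # x in [-1,1], t in [0,1]
colloc = torch.tensor(pts, dtype=torch.float32, requires_grad=True)
# colloc now feeds the PDE residual loss exactly as random points would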


【11】PDE-aware Optimizer for Physics-informed Neural Networks
标题:用于物理信息神经网络的PDE感知优化器
链接:https://arxiv.org/abs/2507.08118

作者:ukla, Manurag Khullar, Vismay Churiwala
摘要:Physics-Informed Neural Networks (PINNs) have emerged as a powerful framework for solving partial differential equations (PDEs) by embedding physical constraints into the loss function. However, standard optimizers such as Adam often struggle to balance competing loss terms, particularly in stiff or ill-conditioned systems. In this work, we propose a PDE-aware optimizer that adapts parameter updates based on the variance of per-sample PDE residual gradients. This method addresses gradient misalignment without incurring the heavy computational costs of second-order optimizers such as SOAP. We benchmark the PDE-aware optimizer against Adam and SOAP on 1D Burgers', Allen-Cahn and Korteweg-de Vries(KdV) equations. Across both PDEs, the PDE-aware optimizer achieves smoother convergence and lower absolute errors, particularly in regions with sharp gradients. Our results demonstrate the effectiveness of PDE residual-aware adaptivity in enhancing stability in PINNs training. While promising, further scaling on larger architectures and hardware accelerators remains an important direction for future research.
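
A hedged reading of the idea as a plain update rule: damp directions where per-sample residual gradients disagree. The scaling below is one plausible instantiation, not necessarily the authors' exact rule; producing the per-sample gradients (e.g., with torch.func.vmap over collocation points) is assumed to happen upstream.

import torch

def pde_aware_step(params, per_sample_grads, lr=1e-3, eps=1e-8):
    # per_sample_grads[i]: (N, *params[i].shape) residual gradients
    with torch.no_grad():
        for p, g in zip(params, per_sample_grads):
            mean = g.mean(dim=0)
            var = g.var(dim=0, unbiased=False)
            p -= lr * mean / (var.sqrt() + eps)   # shrink noisy directions

p = torch.zeros(3, requires_grad=True)
g = torch.randn(16, 3)                            # fake per-sample gradients
pde_aware_step([p], [g])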


【12】Towards Evaluating Robustness of Prompt Adherence in Text to Image Models
标题:评估文生图模型中提示词遵循的鲁棒性
链接:https://arxiv.org/abs/2507.08039

作者:mishetty, Advitiya Arora, Anupama Sharma
摘要:The advancements in the domain of LLMs in recent years have surprised many, showcasing their remarkable capabilities and diverse applications. Their potential applications in various real-world scenarios have led to significant research on their reliability and effectiveness. On the other hand, multimodal LLMs and Text-to-Image models have only recently gained prominence, especially when compared to text-only LLMs. Their reliability remains constrained due to insufficient research on assessing their performance and robustness. This paper aims to establish a comprehensive evaluation framework for Text-to-Image models, concentrating particularly on their adherence to prompts. We created a novel dataset that aimed to assess the robustness of these models in generating images that conform to the specified factors of variation in the input text prompts. Our evaluation studies present findings on three variants of Stable Diffusion models: Stable Diffusion 3 Medium, Stable Diffusion 3.5 Large, and Stable Diffusion 3.5 Large Turbo, and two variants of Janus models: Janus Pro 1B and Janus Pro 7B. We introduce a pipeline that leverages text descriptions generated by the gpt-4o model for our ground-truth images, which are then used to generate artificial images by passing these descriptions to the Text-to-Image models. We then pass these generated images again through gpt-4o using the same system prompt and compare the variation between the two descriptions. Our results reveal that these models struggle to create simple binary images with only two factors of variation: a simple geometric shape and its location. We also show, using pre-trained VAEs on our dataset, that they fail to generate images that follow our input dataset distribution.


【13】Assessing the Capabilities and Limitations of FinGPT Model in Financial NLP Applications
标题:评估FinGPT模型在金融NLP应用中的能力和局限性
链接:https://arxiv.org/abs/2507.08015

作者:Djagba, Chimezie A. Odinakachukwu
摘要:This work evaluates FinGPT, a financial domain-specific language model, across six key natural language processing (NLP) tasks: Sentiment Analysis, Text Classification, Named Entity Recognition, Financial Question Answering, Text Summarization, and Stock Movement Prediction. The evaluation uses finance-specific datasets to assess FinGPT's capabilities and limitations in real-world financial applications. The results show that FinGPT performs strongly in classification tasks such as sentiment analysis and headline categorization, often achieving results comparable to GPT-4. However, its performance is significantly lower in tasks that involve reasoning and generation, such as financial question answering and summarization. Comparisons with GPT-4 and human benchmarks highlight notable performance gaps, particularly in numerical accuracy and complex reasoning. Overall, the findings indicate that while FinGPT is effective for certain structured financial tasks, it is not yet a comprehensive solution. This research provides a useful benchmark for future research and underscores the need for architectural improvements and domain-specific optimization in financial language models.


【14】Entangled Threats: A Unified Kill Chain Model for Quantum Machine Learning Security
标题:纠缠威胁:量子机器学习安全的统一杀伤链模型
链接:https://arxiv.org/abs/2507.08623

作者:bus, Maximilian Wendlinger, Kilian Tscharke, Daniel Herr, Cedric Brügmann, Daniel Ohl de Mello, Juris Ulmanis, Alexander Erhard, Arthur Schmidt, Fabian Petsch
备注:Accepted for publication at IEEE International Conference on Quantum Computing and Engineering (QCE) 2025
摘要:Quantum Machine Learning (QML) systems inherit vulnerabilities from classical machine learning while introducing new attack surfaces rooted in the physical and algorithmic layers of quantum computing. Despite a growing body of research on individual attack vectors - ranging from adversarial poisoning and evasion to circuit-level backdoors, side-channel leakage, and model extraction - these threats are often analyzed in isolation, with unrealistic assumptions about attacker capabilities and system environments. This fragmentation hampers the development of effective, holistic defense strategies. In this work, we argue that QML security requires more structured modeling of the attack surface, capturing not only individual techniques but also their relationships, prerequisites, and potential impact across the QML pipeline. We propose adapting kill chain models, widely used in classical IT and cybersecurity, to the quantum machine learning context. Such models allow for structured reasoning about attacker objectives, capabilities, and possible multi-stage attack paths - spanning reconnaissance, initial access, manipulation, persistence, and exfiltration. Based on extensive literature analysis, we present a detailed taxonomy of QML attack vectors mapped to corresponding stages in a quantum-aware kill chain framework that is inspired by the MITRE ATLAS for classical machine learning. We highlight interdependencies between physical-level threats (like side-channel leakage and crosstalk faults), data and algorithm manipulation (such as poisoning or circuit backdoors), and privacy attacks (including model extraction and training data inference). This work provides a foundation for more realistic threat modeling and proactive security-in-depth design in the emerging field of quantum machine learning.


【15】MIRRAMS: Towards Training Models Robust to Missingness Distribution Shifts
标题:MIRRAMS:迈向对缺失分布偏移稳健的训练模型
链接:https://arxiv.org/abs/2507.08280

作者:, Minseo Kang, Dongha Kim
摘要:In real-world data analysis, missingness distributional shifts between training and test input datasets frequently occur, posing a significant challenge to achieving robust prediction performance. In this study, we propose a novel deep learning framework designed to address such shifts in missingness distributions. We begin by introducing a set of mutual information-based conditions, called MI robustness conditions, which guide a prediction model to extract label-relevant information while remaining invariant to diverse missingness patterns, thereby enhancing robustness to unseen missingness scenarios at test-time. To make these conditions practical, we propose simple yet effective techniques to derive loss terms corresponding to each and formulate a final objective function, termed MIRRAMS(Mutual Information Regularization for Robustness Against Missingness Shifts). As a by-product, our analysis provides a theoretical interpretation of the principles underlying consistency regularization-based semi-supervised learning methods, such as FixMatch. Extensive experiments across various benchmark datasets show that MIRRAMS consistently outperforms existing baselines and maintains stable performance across diverse missingness scenarios. Moreover, our approach achieves state-of-the-art performance even without missing data and can be naturally extended to address semi-supervised learning tasks, highlighting MIRRAMS as a powerful, off-the-shelf framework for general-purpose learning.
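
One way to picture a MIRRAMS-style objective is a supervised term plus a consistency term across independently drawn missingness masks, in the spirit of FixMatch. The sketch below uses zero imputation, a fixed mask rate, and a KL consistency penalty, all of which are assumptions rather than the paper's exact loss terms.

import torch
import torch.nn.functional as F

def mirrams_like_loss(model, x, y, p_miss=0.3, lam=1.0):
    m1 = torch.rand_like(x) > p_miss       # two independent missingness masks
    m2 = torch.rand_like(x) > p_miss
    out1, out2 = model(x * m1), model(x * m2)   # zero-imputed masked views
    sup = F.cross_entropy(out1, y)              # label-relevant information
    cons = F.kl_div(out1.log_softmax(-1), out2.softmax(-1),
                    reduction="batchmean")      # invariance across masks
    return sup + lam * cons

net = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 3))
x, y = torch.randn(8, 20), torch.randint(0, 3, (8,))
print(mirrams_like_loss(net, x, y))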


【16】Parametrized Quantum Circuit Learning for Quantum Chemical Applications
标题:量子化学应用的参数化量子电路学习
链接:https://arxiv.org/abs/2507.08183

作者:Jones, Viki Kumar Prasad, Ulrich Fekl, Hans-Arno Jacobsen
摘要:In the field of quantum machine learning (QML), parametrized quantum circuits (PQCs) -- constructed using a combination of fixed and tunable quantum gates -- provide a promising hybrid framework for tackling complex machine learning problems. Despite numerous proposed applications, there remains limited exploration of datasets relevant to quantum chemistry. In this study, we investigate the potential benefits and limitations of PQCs on two chemically meaningful datasets: (1) the BSE49 dataset, containing bond separation energies for 49 different classes of chemical bonds, and (2) a dataset of water conformations, where coupled-cluster singles and doubles (CCSD) wavefunctions are predicted from lower-level electronic structure methods using the data-driven coupled-cluster (DDCC) approach. We construct a comprehensive set of 168 PQCs by combining 14 data encoding strategies with 12 variational ansätze, and evaluate their performance on circuits with 5 and 16 qubits. Our initial analysis examines the impact of circuit structure on model performance using state-vector simulations. We then explore how circuit depth and training set size influence model performance. Finally, we assess the performance of the best-performing PQCs on current quantum hardware, using both noisy simulations ("fake" backends) and real quantum devices. Our findings underscore the challenges of applying PQCs to chemically relevant problems that are straightforward for classical machine learning methods but remain non-trivial for quantum approaches.
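
A minimal PennyLane example of one (encoding, ansatz) pair out of the 168 studied, fit to a toy regression target; the angle encoding, circuit depth, and optimizer are arbitrary choices for illustration, not the paper's setup.

import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(x, weights):
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)                 # data encoding (angle encoding)
    for layer in weights:                     # variational ansatz
        for i in range(n_qubits):
            qml.RY(layer[i], wires=i)
        for i in range(n_qubits - 1):
            qml.CNOT(wires=[i, i + 1])
    return qml.expval(qml.PauliZ(0))

weights = np.random.uniform(0, np.pi, (2, n_qubits), requires_grad=True)
x = np.array([0.1, 0.5, 0.9, 1.3])            # mock molecular descriptor
opt = qml.GradientDescentOptimizer(0.1)
for _ in range(50):                           # fit <Z> to a toy target value
    weights = opt.step(lambda w: (circuit(x, w) - 0.3) ** 2, weights)
print(circuit(x, weights))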


【17】CLEAR: Calibrated Learning for Epistemic and Aleatoric Risk
标题:CLEAR:针对认知不确定性与偶然不确定性风险的校准学习
链接:https://arxiv.org/abs/2507.08150

作者:i, Juraj Bodik, Jakob Heiss, Bin Yu
备注:Code: this https URL
摘要:Accurate uncertainty quantification is critical for reliable predictive modeling, especially in regression tasks. Existing methods typically address either aleatoric uncertainty from measurement noise or epistemic uncertainty from limited data, but not necessarily both in a balanced way. We propose CLEAR, a calibration method with two distinct parameters, $\gamma_1$ and $\gamma_2$, to combine the two uncertainty components for improved conditional coverage. CLEAR is compatible with any pair of aleatoric and epistemic estimators; we show how it can be used with (i) quantile regression for aleatoric uncertainty and (ii) ensembles drawn from the Predictability-Computability-Stability (PCS) framework for epistemic uncertainty. Across 17 diverse real-world datasets, CLEAR achieves an average improvement of 28.2% and 17.4% in the interval width compared to the two individually calibrated baselines while maintaining nominal coverage. This improvement can be particularly evident in scenarios dominated by either high epistemic or high aleatoric uncertainty.
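
The interface implied by the abstract is simple: given per-point aleatoric and epistemic estimates, fit two scale factors on a calibration set so that mean ± (γ1·epistemic + γ2·aleatoric) reaches target coverage with minimal width. The naive grid search below is a stand-in for the paper's calibration procedure, on mock data.

import numpy as np

def coverage(y, lo, hi):
    return np.mean((y >= lo) & (y <= hi))

def fit_clear(y_cal, mean, alea, epi, target=0.9):
    best, best_width = None, np.inf
    for g1 in np.linspace(0, 3, 31):
        for g2 in np.linspace(0, 3, 31):
            half = g1 * epi + g2 * alea        # combined half-width
            if coverage(y_cal, mean - half, mean + half) >= target:
                if np.mean(2 * half) < best_width:
                    best, best_width = (g1, g2), np.mean(2 * half)
    return best

rng = np.random.default_rng(0)
mean = rng.normal(size=500)
alea = np.abs(rng.normal(1.0, 0.2, 500))       # e.g. quantile-regression width
epi = np.abs(rng.normal(0.5, 0.2, 500))        # e.g. PCS ensemble spread
y_cal = mean + rng.normal(0, 1, 500) * alea
print(fit_clear(y_cal, mean, alea, epi))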


其他(17篇)

【1】Filter Equivariant Functions: A symmetric account of length-general extrapolation on lists
标题:过滤等变函数:列表上长度泛化外推的对称性解释
链接:https://arxiv.org/abs/2507.08796

作者:s, Neil Ghani, Andrew Dudzik, Christos Perivolaropoulos, Razvan Pascanu, Petar Veličković
备注:18 pages, 2 figures
摘要:What should a function that extrapolates beyond known input/output examples look like? This is a tricky question to answer in general, as any function matching the outputs on those examples can in principle be a correct extrapolant. We argue that a "good" extrapolant should follow certain kinds of rules, and here we study a particularly appealing criterion for rule-following in list functions: that the function should behave predictably even when certain elements are removed. In functional programming, a standard way to express such removal operations is by using a filter function. Accordingly, our paper introduces a new semantic class of functions -- the filter equivariant functions. We show that this class contains interesting examples, prove some basic theorems about it, and relate it to the well-known class of map equivariant functions. We also present a geometric account of filter equivariants, showing how they correspond naturally to certain simplicial structures. Our highlight result is the amalgamation algorithm, which constructs any filter-equivariant function's output by first studying how it behaves on sublists of the input, in a way that extrapolates perfectly.
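
In the sense sketched in the abstract, a list function f is filter equivariant when it commutes with filtering by any element predicate p, i.e. f(filter(p, xs)) == filter(p, f(xs)); the paper's formal definition may be more refined. A small property-style check:

def commutes_with_filter(f, xs, p):
    return f([x for x in xs if p(x)]) == [x for x in f(xs) if p(x)]

xs = [3, 1, 4, 1, 5, 9, 2, 6]
is_odd = lambda x: x % 2 == 1
print(commutes_with_filter(lambda s: s[::-1], xs, is_odd))   # True: reverse
print(commutes_with_filter(sorted, xs, is_odd))              # True: sort
print(commutes_with_filter(lambda s: s[:3], xs, is_odd))     # False: take-3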


【2】BlockFFN: Towards End-Side Acceleration-Friendly Mixture-of-Experts with Chunk-Level Activation Sparsity
标题:BlockFFN:迈向具有区块级激活稀疏性的端端加速友好型专家混合
链接:https://arxiv.org/abs/2507.08771

作者:Song, Weilin Zhao, Xu Han, Chaojun Xiao, Yingfa Chen, Yuxuan Li, Zhiyuan Liu, Maosong Sun
备注:21 pages, 7 figures, 15 tables
摘要:To alleviate the computational burden of large language models (LLMs), architectures with activation sparsity, represented by mixture-of-experts (MoE), have attracted increasing attention. However, the non-differentiable and inflexible routing of vanilla MoE hurts model performance. Moreover, while each token activates only a few parameters, these sparsely-activated architectures exhibit low chunk-level sparsity, indicating that the union of multiple consecutive tokens activates a large ratio of parameters. Such a sparsity pattern is unfriendly for acceleration under low-resource conditions (e.g., end-side devices) and incompatible with mainstream acceleration techniques (e.g., speculative decoding). To address these challenges, we introduce a novel MoE architecture, BlockFFN, as well as its efficient training and deployment techniques. Specifically, we use a router integrating ReLU activation and RMSNorm for differentiable and flexible routing. Next, to promote both token-level sparsity (TLS) and chunk-level sparsity (CLS), CLS-aware training objectives are designed, making BlockFFN more acceleration-friendly. Finally, we implement efficient acceleration kernels, combining activation sparsity and speculative decoding for the first time. The experimental results demonstrate the superior performance of BlockFFN over other MoE baselines, achieving over 80% TLS and 70% 8-token CLS. Our kernels achieve up to a 3.67$\times$ speedup over dense models on real end-side devices. All codes and checkpoints are available publicly (https://github.com/thunlp/BlockFFN).
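
A sketch of the router described in the abstract: a linear map followed by ReLU (whose exact zeros give differentiable sparsity) and RMSNorm scaling. Sizes and the gain parameterization are illustrative assumptions; the expert FFNs themselves are omitted.

import torch
import torch.nn as nn

class ReluRmsRouter(nn.Module):
    def __init__(self, d_model, n_experts, eps=1e-6):
        super().__init__()
        self.proj = nn.Linear(d_model, n_experts, bias=False)
        self.gain = nn.Parameter(torch.ones(n_experts))
        self.eps = eps

    def forward(self, x):                      # x: (B, T, d_model)
        a = torch.relu(self.proj(x))           # exact zeros -> sparse routing
        rms = a.pow(2).mean(-1, keepdim=True).add(self.eps).sqrt()
        return self.gain * a / rms             # RMSNorm-scaled expert weights

w = ReluRmsRouter(256, 16)(torch.randn(2, 8, 256))
print((w == 0).float().mean())                 # fraction of inactive experts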


【3】Hashing for Fast Pattern Set Selection
标题:快速模式集选择的哈希算法
链接:https://arxiv.org/abs/2507.08745

作者:jalainen, Pauli Miettinen
备注:17 pages, 5 figures, to appear at ECML-PKDD 2025
摘要:Pattern set mining, which is the task of finding a good set of patterns instead of all patterns, is a fundamental problem in data mining. Many different definitions of what constitutes a good set have been proposed in recent years. In this paper, we consider the reconstruction error as a proxy measure for the goodness of the set, and concentrate on the adjacent problem of how to find a good set efficiently. We propose a method based on bottom-k hashing for efficiently selecting the set and extend the method for the common case where the patterns might only appear in approximate form in the data. Our approach has applications in tiling databases, Boolean matrix factorization, and redescription mining, among others. We show that our hashing-based approach is significantly faster than the standard greedy algorithm while obtaining almost equally good results in both synthetic and real-world data sets.
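
The enabling primitive is that a bottom-k sketch of each pattern's cover supports cheap union-size (and hence coverage-gain) estimates without scanning full covers. A self-contained sketch of that primitive; the hash choice and k are arbitrary, and this is not the paper's full selection algorithm.

import hashlib

def h(item):
    # 128-bit hash, uniform over [0, 2**128)
    return int(hashlib.md5(str(item).encode()).hexdigest(), 16)

def bottom_k(items, k=64):
    return sorted(h(x) for x in items)[:k]

def union_sketch(s1, s2, k=64):
    # bottom-k of a union is computable from the two bottom-k sketches alone
    return sorted(set(s1) | set(s2))[:k]

def est_size(sketch, k=64):
    # classic bottom-k estimator: (k - 1) / normalized k-th smallest hash
    if len(sketch) < k:
        return len(sketch)
    return (k - 1) * (2 ** 128) // sketch[k - 1]

a = bottom_k(range(0, 10000))          # cover of pattern A (row ids)
b = bottom_k(range(5000, 15000))       # cover of pattern B
print(est_size(a), est_size(union_sketch(a, b)))   # roughly 10000 and 15000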


【4】Catastrophic Forgetting Mitigation Through Plateau Phase Activity Profiling
标题:通过平台期活动分析缓解灾难性遗忘
链接:https://arxiv.org/abs/2507.08736

作者:iach, Oren Glickman, Tom Tirer
摘要:Catastrophic forgetting in deep neural networks occurs when learning new tasks degrades performance on previously learned tasks due to knowledge overwriting. Among the approaches to mitigate this issue, regularization techniques aim to identify and constrain "important" parameters to preserve previous knowledge. In the highly nonconvex optimization landscape of deep learning, we propose a novel perspective: tracking parameters during the final training plateau is more effective than monitoring them throughout the entire training process. We argue that parameters that exhibit higher activity (movement and variability) during this plateau reveal directions in the loss landscape that are relatively flat, making them suitable for adaptation to new tasks while preserving knowledge from previous ones. Our comprehensive experiments demonstrate that this approach achieves superior performance in balancing catastrophic forgetting mitigation with strong performance on newly learned tasks.
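
One way to operationalize the idea: once training reaches its final plateau, accumulate per-parameter movement and treat the most active coordinates as flat directions available for the next task. The profiler below is a hedged sketch; the quantile threshold is an assumption.

import torch

class PlateauProfiler:
    def __init__(self, model):
        self.snap = [p.detach().clone() for p in model.parameters()]
        self.activity = [torch.zeros_like(p) for p in model.parameters()]

    def step(self, model):               # call once per step on the plateau
        for a, s, p in zip(self.activity, self.snap, model.parameters()):
            a += (p.detach() - s).abs()  # accumulated movement
            s.copy_(p.detach())

    def adaptable_mask(self, q=0.5):     # top-activity half deemed adaptable
        flat = torch.cat([a.flatten() for a in self.activity])
        thr = flat.quantile(q)
        return [a > thr for a in self.activity]

net = torch.nn.Linear(4, 2)
prof = PlateauProfiler(net)
net.weight.data += 0.01 * torch.randn_like(net.weight)   # mock plateau step
prof.step(net)
masks = prof.adaptable_mask()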


【5】On the Effect of Regularization in Policy Mirror Descent
标题:论策略镜像下降中的正则化效应
链接:https://arxiv.org/abs/2507.08718

作者: Kleuker, Aske Plaat, Thomas Moerland
备注:Accepted at RLC
摘要:Policy Mirror Descent (PMD) has emerged as a unifying framework in reinforcement learning (RL) by linking policy gradient methods with a first-order optimization method known as mirror descent. At its core, PMD incorporates two key regularization components: (i) a distance term that enforces a trust region for stable policy updates and (ii) an MDP regularizer that augments the reward function to promote structure and robustness. While PMD has been extensively studied in theory, empirical investigations remain scarce. This work provides a large-scale empirical analysis of the interplay between these two regularization techniques, running over 500k training seeds on small RL environments. Our results demonstrate that, although the two regularizers can partially substitute each other, their precise combination is critical for achieving robust performance. These findings highlight the potential for advancing research on more robust algorithms in RL, particularly with respect to hyperparameter sensitivity.
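
For a tabular policy, one PMD iteration with both regularizers has a closed softmax form: the KL distance term anchors the update to the previous policy, while entropy regularization shrinks the logits. The update below is the standard entropy-regularized PMD step, shown here as a toy, not code from the paper.

import numpy as np

def pmd_update(pi, Q, eta=1.0, tau=0.1):
    # pi'(a) proportional to pi(a)^(1/(1+eta*tau)) * exp(eta*Q(a)/(1+eta*tau))
    logits = (np.log(pi) + eta * Q) / (1.0 + eta * tau)
    p = np.exp(logits - logits.max())      # numerically stable softmax
    return p / p.sum()

pi = np.ones(3) / 3
Q = np.array([1.0, 0.0, 0.5])
for _ in range(10):
    pi = pmd_update(pi, Q)                 # anchored, entropy-shrunk updates
print(pi.round(3))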


【6】The Impact of Automatic Speech Transcription on Speaker Attribution
标题:自动语音转录对说话者归因的影响
链接:https://arxiv.org/abs/2507.08660

作者:Aggazzotti, Matthew Wiesner, Elizabeth Allyn Smith, Nicholas Andrews
摘要:Speaker attribution from speech transcripts is the task of identifying a speaker from the transcript of their speech based on patterns in their language use. This task is especially useful when the audio is unavailable (e.g. deleted) or unreliable (e.g. anonymized speech). Prior work in this area has primarily focused on the feasibility of attributing speakers using transcripts produced by human annotators. However, in real-world settings, one often only has more errorful transcripts produced by automatic speech recognition (ASR) systems. In this paper, we conduct what is, to our knowledge, the first comprehensive study of the impact of automatic transcription on speaker attribution performance. In particular, we study the extent to which speaker attribution performance degrades in the face of transcription errors, as well as how properties of the ASR system impact attribution. We find that attribution is surprisingly resilient to word-level transcription errors and that the objective of recovering the true transcript is minimally correlated with attribution performance. Overall, our findings suggest that speaker attribution on more errorful transcripts produced by ASR is as good, if not better, than attribution based on human-transcribed data, possibly because ASR transcription errors can capture speaker-specific features revealing of speaker identity.


【7】Scaling Attention to Very Long Sequences in Linear Time with Wavelet-Enhanced Random Spectral Attention (WERSA)
标题:利用小波增强随机谱注意力(WERSA)在线性时间内将注意力扩展到超长序列
链接:https://arxiv.org/abs/2507.08637

作者:Dentamaro
备注:10 pages, 1 figure
摘要:Transformer models are computationally costly on long sequences since regular attention has quadratic $O(n^2)$ time complexity. We introduce Wavelet-Enhanced Random Spectral Attention (WERSA), a novel mechanism of linear $O(n)$ time complexity that is pivotal to enable successful long-sequence processing without the performance trade-off. WERSA merges content-adaptive random spectral features together with multi-resolution Haar wavelets and learnable parameters to selectively attend to informative scales of data while preserving linear efficiency. Large-scale comparisons on a single GPU and across various benchmarks (vision, NLP, hierarchical reasoning) and various attention mechanisms (like Multiheaded Attention, Flash-Attention-2, FNet, Linformer, Performer, Waveformer) reveal uniform advantages of WERSA. It achieves best accuracy in all tests. On ArXiv classification, WERSA improves accuracy over vanilla attention by 1.2% (86.2% vs 85.0%) while cutting training time by 81% (296s vs 1554s) and FLOPS by 73.4% (26.2G vs 98.4G). Significantly, WERSA excels where vanilla and FlashAttention-2 fail: on ArXiv-128k's extremely lengthy sequences, it achieves best accuracy (79.1%) and AUC (0.979) among viable methods, operating on data that gives Out-Of-Memory errors to quadratic methods while being twice as fast as Waveformer, its next-best competitor. By significantly reducing computational loads without compromising accuracy, WERSA makes possible more practical, more affordable, long-context models, in particular on low-resource hardware, for more sustainable and more scalable AI development.


【8】Emergent Natural Language with Communication Games for Improving Image Captioning Capabilities without Additional Data
标题:带有通信游戏的新兴自然语言,无需额外数据即可提高图像字幕能力
链接:https://arxiv.org/abs/2507.08610

作者:ta, Ambedkar Dukkipati
摘要:Image captioning is an important problem in developing various AI systems, and these tasks require large volumes of annotated images to train the models. Since all existing labelled datasets are already used for training the large Vision Language Models (VLMs), it becomes challenging to improve the performance of the same. Considering this, it is essential to consider the unsupervised image captioning performance, which remains relatively under-explored. To that end, we propose LoGIC (Lewis Communication Game for Image Captioning), a Multi-agent Reinforcement Learning game. The proposed method consists of two agents, a 'speaker' and a 'listener', with the objective of learning a strategy for communicating in natural language. We train agents in the cooperative common-reward setting using the GRPO algorithm and show that improvement in image captioning performance emerges as a consequence of the agents learning to play the game. We show that using pre-trained VLMs as the 'speaker' and Large Language Model (LLM) for language understanding in the 'listener', we achieved a $46$ BLEU score after fine-tuning using LoGIC without additional labels, a $2$ units advantage in absolute metrics compared to the $44$ BLEU score of the vanilla VLM. Additionally, we replace the VLM from the 'speaker' with lightweight components: (i) a ViT for image perception and (ii) a GPT2 language generation, and train them from scratch using LoGIC, obtaining a $31$ BLEU score in the unsupervised setting, a $10$ points advantage over existing unsupervised image-captioning methods.


【9】Remote Sensing Reveals Adoption of Sustainable Rice Farming Practices Across Punjab, India
标题:遥感揭示印度旁遮普邦采用可持续水稻种植实践
链接:https://arxiv.org/abs/2507.08605

作者:, Rajveer Singh, Akram Zaytar, Girmaw Abebe Tadesse, Caleb Robinson, Negar Tafti, Stephen A. Wood, Rahul Dodhia, Juan M. Lavista Ferres
备注:Dataset and code will be published shortly and links updated in v2
摘要:Rice cultivation consumes 24-30% of global freshwater, creating critical water management challenges in major rice-producing regions. Sustainable irrigation practices like direct seeded rice (DSR) and alternate wetting and drying (AWD) can reduce water use by 20-40% while maintaining yields, helping secure long-term agricultural productivity as water scarcity intensifies - a key component of the Zero Hunger Sustainable Development Goal. However, limited data on adoption rates of these practices prevents evidence-based policymaking and targeted resource allocation. We developed a novel remote sensing framework to monitor sustainable water management practices at scale in Punjab, India - a region facing severe groundwater depletion of 41.6 cm/year. To collect essential ground truth data, we partnered with the Nature Conservancy's Promoting Regenerative and No-burn Agriculture (PRANA) program, which trained approximately 1,400 farmers on water-saving techniques while documenting their field-level practices. Using this data, we created a classification system with Sentinel-1 satellite imagery that separates water management along sowing and irrigation dimensions. Our approach achieved a 78% F1-score in distinguishing DSR from traditional puddled transplanted rice without requiring prior knowledge of planting dates. We demonstrated scalability by mapping DSR adoption across approximately 3 million agricultural plots in Punjab, with district-level predictions showing strong correlation (Pearson=0.77, RBO= 0.77) with government records. This study provides policymakers with a powerful tool to track sustainable water management adoption, target interventions, and measure program impacts at scale.


【10】Recursive Reward Aggregation
标题:递归奖励聚合
链接:https://arxiv.org/abs/2507.08537

作者:ng, Yivan Zhang, Johannes Ackermann, Yu-Jie Zhang, Soichiro Nishimori, Masashi Sugiyama
备注:Reinforcement Learning Conference 2025
摘要:In reinforcement learning (RL), aligning agent behavior with specific objectives typically requires careful design of the reward function, which can be challenging when the desired objectives are complex. In this work, we propose an alternative approach for flexible behavior alignment that eliminates the need to modify the reward function by selecting appropriate reward aggregation functions. By introducing an algebraic perspective on Markov decision processes (MDPs), we show that the Bellman equations naturally emerge from the recursive generation and aggregation of rewards, allowing for the generalization of the standard discounted sum to other recursive aggregations, such as discounted max and Sharpe ratio. Our approach applies to both deterministic and stochastic settings and integrates seamlessly with value-based and actor-critic algorithms. Experimental results demonstrate that our approach effectively optimizes diverse objectives, highlighting its versatility and potential for real-world applications.
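
The abstract's point fits in a few lines: value iteration is generic in the aggregation used by the backup, and swapping the discounted sum for a recursive max changes what the fixed point measures. A toy one-state MDP:

import numpy as np

def backup(r, v_next, gamma, agg):
    if agg == "sum":                      # standard: r + gamma * v'
        return r + gamma * v_next
    if agg == "max":                      # recursive max: max(r, gamma * v')
        return np.maximum(r, gamma * v_next)

r, gamma = np.array([0.1, 1.0]), 0.9      # two actions, state loops to itself
for agg in ("sum", "max"):
    v = 0.0
    for _ in range(200):                  # greedy value iteration
        v = backup(r, v, gamma, agg).max()
    print(agg, round(float(v), 3))        # sum -> 10.0, max -> 1.0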


【11】Ranked Set Sampling-Based Multilayer Perceptron: Improving Generalization via Variance-Based Bounds
标题:基于秩集采样的多层感知器:通过基于方差的边界提高泛化能力
链接:https://arxiv.org/abs/2507.08465

作者:Li, Liuya Zhang, Jieting Wang, Tao Yan, Yuhua Qian
摘要:Multilayer perceptron (MLP), one of the most fundamental neural networks, is extensively utilized for classification and regression tasks. In this paper, we establish a new generalization error bound, which reveals how the variance of empirical loss influences the generalization ability of the learning model. Inspired by this learning bound, we advocate reducing the variance of empirical loss to enhance the ability of MLP. As is well-known, bagging is a popular ensemble method to realize variance reduction. However, bagging produces the base training data sets by the Simple Random Sampling (SRS) method, which exhibits a high degree of randomness. To handle this issue, we introduce an ordered structure in the training data set by Rank Set Sampling (RSS) to further reduce the variance of loss and develop an RSS-MLP method. Theoretical results show that the variance of empirical exponential loss and the logistic loss estimated by RSS are smaller than those estimated by SRS, respectively. To validate the performance of RSS-MLP, we conduct comparison experiments on twelve benchmark data sets in terms of the two convex loss functions under two fusion methods. Extensive experimental results and analysis illustrate the effectiveness and rationality of the proposed method.
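
For reference, the RSS procedure itself: draw m candidate sets of size m, rank each by a cheap auxiliary score, and keep the i-th order statistic from the i-th set, yielding an ordered-structure sample with lower variance than SRS. Using a single scalar ranking score per example is an assumption made for illustration.

import numpy as np

def rss_indices(scores, m, rng):
    chosen = []
    for i in range(m):
        cand = rng.choice(len(scores), size=m, replace=False)
        ranked = cand[np.argsort(scores[cand])]
        chosen.append(ranked[i])          # i-th order statistic of set i
    return np.array(chosen)

rng = np.random.default_rng(0)
scores = rng.normal(size=1000)            # cheap ranking variable
batch = rss_indices(scores, m=32, rng=rng)   # one RSS base training set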


【12】Space filling positionality and the Spiroformer
标题:空间填充位置性与Spiroformer
链接:https://arxiv.org/abs/2507.08456

作者:, M.Á. Evangelista-Alvarado, P. Suárez-Serrato
备注:9 pages, 5 figures. To appear in Geometric Science of Information 2025
摘要:Transformers excel when dealing with sequential data. Generalizing transformer models to geometric domains, such as manifolds, we encounter the problem of not having a well-defined global order. We propose a solution with attention heads following a space-filling curve. As a first experimental example, we present the Spiroformer, a transformer that follows a polar spiral on the $2$-sphere.


【13】Towards AI-Native RAN: An Operator's Perspective of 6G Day 1 Standardization
标题:迈向AI原生RAN:运营商视角下的6G Day 1标准化
链接:https://arxiv.org/abs/2507.08403

作者:i Sun, Lehan Wang, Xiaofei Xu, Jinri Huang, Chunhui Liu, Jing Gao, Yuhong Huang, Chih-Lin I
摘要:Artificial Intelligence/Machine Learning (AI/ML) has become the most certain and prominent feature of 6G mobile networks. Unlike 5G, where AI/ML was not natively integrated but rather an add-on feature over existing architecture, 6G shall incorporate AI from the onset to address its complexity and support ubiquitous AI applications. Based on our extensive mobile network operation and standardization experience from 2G to 5G, this paper explores the design and standardization principles of AI-Native radio access networks (RAN) for 6G, with a particular focus on its critical Day 1 architecture, functionalities and capabilities. We investigate the framework of AI-Native RAN and present its three essential capabilities to shed some light on the standardization direction; namely, AI-driven RAN processing/optimization/automation, reliable AI lifecycle management (LCM), and AI-as-a-Service (AIaaS) provisioning. The standardization of AI-Native RAN, in particular the Day 1 features, including an AI-Native 6G RAN architecture, was proposed. For validation, a large-scale field trial with over 5000 5G-A base stations has been built and delivered significant improvements in average air interface latency, root cause identification, and network energy consumption with the proposed architecture and the supporting AI functions. This paper aims to provide a Day 1 framework for 6G AI-Native RAN standardization design, balancing technical innovation with practical deployment.


【14】Exploring Gender Differences in Chronic Pain Discussions on Reddit
标题:探索Reddit上慢性疼痛讨论中的性别差异
链接:https://arxiv.org/abs/2507.08241

作者:ria Andrade, Tanvi Banerjee, Ramakrishna Mundugar
备注:This is an extended version of the short paper accepted at ASONAM 2025
摘要:Pain is an inherent part of human existence, manifesting as both physical and emotional experiences, and can be categorized as either acute or chronic. Over the years, extensive research has been conducted to understand the causes of pain and explore potential treatments, with contributions from various scientific disciplines. However, earlier studies often overlooked the role of gender in pain experiences. In this study, we utilized Natural Language Processing (NLP) to analyze and gain deeper insights into individuals' pain experiences, with a particular focus on gender differences. We successfully classified posts into male and female corpora using the Hidden Attribute Model-Convolutional Neural Network (HAM-CNN), achieving an F1 score of 0.86 by aggregating posts based on usernames. Our analysis revealed linguistic differences between genders, with female posts tending to be more emotionally focused. Additionally, the study highlighted that conditions such as migraine and sinusitis are more prevalent among females and explored how pain medication affects individuals differently based on gender.


【15】Just Read the Question: Enabling Generalization to New Assessment Items with Text Awareness
标题:只需阅读问题:通过文本感知实现对新测评题目的泛化
链接:https://arxiv.org/abs/2507.08154

作者:an, Nathaniel Li, Tori Shen, Anna N. Rafferty
备注:Poster paper at Educational Data Mining (EDM) 2025
摘要:Machine learning has been proposed as a way to improve educational assessment by making fine-grained predictions about student performance and learning relationships between items. One challenge with many machine learning approaches is incorporating new items, as these approaches rely heavily on historical data. We develop Text-LENS by extending the LENS partial variational auto-encoder for educational assessment to leverage item text embeddings, and explore the impact on predictive performance and generalization to previously unseen items. We examine performance on two datasets: Eedi, a publicly available dataset that includes item content, and LLM-Sim, a novel dataset with test items produced by an LLM. We find that Text-LENS matches LENS' performance on seen items and improves upon it in a variety of conditions involving unseen items; it effectively learns student proficiency from and makes predictions about student performance on new items.


【16】Low-rank Momentum Factorization for Memory Efficient Training
标题:用于内存高效训练的低秩动量分解
链接:https://arxiv.org/abs/2507.08091

作者:hdavinia, Mehrdad Mahdavi
摘要:Fine-tuning large foundation models presents significant memory challenges due to stateful optimizers like AdamW, often requiring several times more GPU memory than inference. While memory-efficient methods like parameter-efficient fine-tuning (e.g., LoRA) and optimizer state compression exist, recent approaches like GaLore bridge these by using low-rank gradient projections and subspace moment accumulation. However, such methods may struggle with fixed subspaces or computationally costly offline resampling (e.g., requiring full-matrix SVDs). We propose Momentum Factorized SGD (MoFaSGD), which maintains a dynamically updated low-rank SVD representation of the first-order momentum, closely approximating its full-rank counterpart throughout training. This factorization enables a memory-efficient fine-tuning method that adaptively updates the optimization subspace at each iteration. Crucially, MoFaSGD leverages the computed low-rank momentum factors to perform efficient spectrally normalized updates, offering an alternative to subspace moment accumulation. We establish theoretical convergence guarantees for MoFaSGD, proving it achieves an optimal rate for non-convex stochastic optimization under standard assumptions. Empirically, we demonstrate MoFaSGD's effectiveness on large language model alignment benchmarks, achieving a competitive trade-off between memory reduction (comparable to LoRA) and performance compared to state-of-the-art low-rank optimization methods. Our implementation is available at https://github.com/pmahdavi/MoFaSGD.
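
A hedged sketch of the mechanism: keep the momentum of each weight matrix as a rank-r factorization, blend in each new gradient, re-truncate with a low-rank SVD, and apply a spectrally normalized update from the factors. The exact blending and normalization in MoFaSGD may differ from this simplification.

import torch

def mofa_step(W, grad, UsV, lr=1e-3, beta=0.9, r=8):
    U, s, V = UsV
    M = beta * (U * s) @ V.T + (1 - beta) * grad   # momentum, then re-truncate
    U, s, V = torch.svd_lowrank(M, q=r)
    W.data -= lr * (U @ V.T)     # unit singular values: spectral normalization
    return U, s, V

W = torch.randn(256, 128)
UsV = (torch.zeros(256, 8), torch.zeros(8), torch.zeros(128, 8))
UsV = mofa_step(W, torch.randn(256, 128), UsV)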


【17】Entity-Specific Cyber Risk Assessment using InsurTech Empowered Risk Factors
标题:使用InsurTech赋能的风险因素进行特定实体的网络风险评估
链接:https://arxiv.org/abs/2507.08193

作者:, Zhiyun Quan, Linfeng Zhang
摘要:The lack of high-quality public cyber incident data limits empirical research and predictive modeling for cyber risk assessment. This challenge persists due to the reluctance of companies to disclose incidents that could damage their reputation or investor confidence. Therefore, from an actuarial perspective, potential resolutions include two aspects: the enhancement of existing cyber incident datasets and the implementation of advanced modeling techniques to optimize the use of the available data. A review of existing data-driven methods highlights a significant lack of entity-specific organizational features in publicly available datasets. To address this gap, we propose a novel InsurTech framework that enriches cyber incident data with entity-specific attributes. We develop various machine learning (ML) models: a multilabel classification model to predict the occurrence of cyber incident types (e.g., Privacy Violation, Data Breach, Fraud and Extortion, IT Error, and Others) and a multioutput regression model to estimate their annual frequencies. While classifier and regressor chains are implemented to explore dependencies among cyber incident types as well, no significant correlations are observed in our datasets. Besides, we apply multiple interpretable ML techniques to identify and cross-validate potential risk factors developed by InsurTech across ML models. We find that InsurTech empowered features enhance prediction occurrence and frequency estimation robustness compared to only using conventional risk factors. The framework generates transparent, entity-specific cyber risk profiles, supporting customized underwriting and proactive cyber risk mitigation. It provides insurers and organizations with data-driven insights to support decision-making and compliance planning.


机器翻译由腾讯交互翻译提供,仅供参考
