
ARXIV CS OS 202512 SUMMARY

arXiv cs.OS Paper Analysis Report: 2025-12-01 to 2025-12-31


📊 Data Statistics Overview

📈 Basic Statistics

  • Total papers: 20
  • Category analyzed: cs.OS
  • Date range: 2025-12-01 to 2025-12-31
  • Unique authors: 123

👥 Top 10 Prolific Authors

  1. Haibo Chen (2 papers)
  2. Sina Abdollahi (1 paper)
  3. Amir Al Sadi (1 paper)
  4. Marios Kogias (1 paper)
  5. David Kotz (1 paper)
  6. Hamed Haddadi (1 paper)
  7. Kaiwei Tu (1 paper)
  8. Kan Wu (1 paper)
  9. Andrea C. Arpaci-Dusseau (1 paper)
  10. Remzi H. Arpaci-Dusseau (1 paper)

🔍 Top 10 Keywords

  1. data (12 occurrences)
  2. memory (8 occurrences)
  3. workloads (7 occurrences)
  4. storage (6 occurrences)
  5. modern (6 occurrences)
  6. operating (6 occurrences)
  7. hardware (6 occurrences)
  8. gpu (6 occurrences)
  9. due (5 occurrences)
  10. libraries (5 occurrences)
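Keyword frequencies like those above can be derived from a simple word count over the paper abstracts. A minimal sketch of such a pipeline, assuming a basic tokenizer and an illustrative stop-word list (the report's actual extraction method is not specified):

```python
from collections import Counter
import re

def top_keywords(abstracts, k=10,
                 stopwords=frozenset({"the", "a", "of", "and", "to", "in", "for", "is", "we"})):
    """Count word frequencies across a list of abstracts, skipping stop words."""
    counts = Counter()
    for text in abstracts:
        words = re.findall(r"[a-z]+", text.lower())
        counts.update(w for w in words if w not in stopwords)
    return counts.most_common(k)

# Illustrative abstracts, not real arXiv data.
abstracts = [
    "Modern storage workloads stress the memory hierarchy.",
    "GPU memory scheduling for modern workloads.",
]
print(top_keywords(abstracts, k=3))
```

That a filler word like "due" ranks in the top 10 suggests the report's own pipeline used little or no stop-word filtering.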

🤖 AI In-Depth Analysis

arXiv cs.OS Research Analysis

A Deep Dive into Operating Systems Trends from December 2025

1. Executive Summary

This report analyzes 20 papers from the arXiv cs.OS category published between December 1 and December 31, 2025. The analysis reveals a significant trend: the deep integration of Large Language Models (LLMs) and AI/ML techniques into core operating system design, verification, and optimization. Over a quarter of all research focuses on either leveraging LLMs to build better systems or redesigning systems to more efficiently serve LLM workloads. Traditional strongholds like real-time systems, security, and virtualization remain highly active, but are increasingly influenced by AI and the demands of accelerated hardware. The landscape suggests a paradigm shift where OS development is becoming less about manual heuristics and more about data-driven, generative, and verifiable approaches.

2. Author & Collaboration Analysis

The author graph reveals several clusters of collaboration, with some individuals acting as bridges between topics. A few key patterns emerge:

  • High-Impact Groups: The collaboration between Andrea and Remzi Arpaci-Dusseau on storage systems is a continuation of foundational work from their well-known research group. Similarly, Haibo Chen is a prolific author, appearing on two distinct but related papers concerning generative file systems and GPU scheduling, indicating a large and influential lab focused on next-generation OS design.
  • Large Collaborations: The paper VLCs: Managing Parallelism with Virtualized Libraries features a very large author list, typical of projects tackling complex, multi-layered system problems that require expertise from hardware architecture to runtime systems.
  • Bridging Topics: The work on Trustworthy LLM utilization with TEEs connects the "Confidential Computing" and "AI/ML for Systems" communities, showing a trend towards securing AI pipelines.

Author Collaboration Graph

```mermaid
graph TD
  subgraph "AI/ML for Systems"
    A1["Q. Liu"] -- "SYSSPEC" --- B1["M. Zou"]
    A1 -- "SYSSPEC" --- C1["H. Chen"]
    B1 -- "SYSSPEC" --- C1
    D1["W. Shen"] -- "MSched" --- C1
    E1["R. Dwivedula"] -- "Vulcan" --- F1["D. Saxena"]
    G1["S. Feng"] -- "EVICPRESS" --- H1["J. Jiang"]
    G1 -- "EVICPRESS" --- I1["G. Ananthanarayanan"]
    J1["C. Yang"] -- "VeruSAGE" --- K1["S. Lu"]
    J1 -- "VeruSAGE" --- L1["C. Hawblitzel"]
    M1["M. Nasirzadeh"] -- "CoLog" --- N1["J. Tahmoresnezhad"]
  end
  subgraph "Real-Time Systems"
    A2["M. Völp"] -- "Event-Triggered" --- B2["G. Fohler"]
    C2["N. Chen"] -- "LEFT-RS" --- D2["X. Dai"]
    C2 -- "LEFT-RS" --- E2["A. Burns"]
    F2["H. Takahashi"] -- "Prob. Analysis" --- G2["T. Azumi"]
  end
  subgraph "Security & Virtualization"
    A3["S. Abdollahi"] -- "Confidential Comm" --- B3["M. Kogias"]
    C3["R. Ishii"] -- "NecoFuzz" --- D3["T. Shinagawa"]
    E3["G. Zou"] -- "ZeroOS" --- F3["D. Boneh"]
    E3 -- "ZeroOS" --- G3["D. L. -K. Wong"]
    H3["A. Avina"] -- "pokiSEC" --- I3["N. K. Chaudhary"]
  end
  subgraph "Storage & Performance"
    A4["K. Tu"] -- "MOST" --- B4["A. Arpaci-Dusseau"]
    A4 -- "MOST" --- C4["R. Arpaci-Dusseau"]
    B4 -- "MOST" --- C4
    D4["Z. Chen"] -- "Perf-Tunability" --- E4["T. Xu"]
    F4["A. Paznikov"] -- "Race Detection" --- G4["M. Schwarz"]
  end
  subgraph "GPU & Extensibility"
    C1 -- "GPU Policies" --- H4["Y. Zheng"]
    H4 -- "GPU Policies" --- I4["D. Williams"]
  end
  C1 -- "Connects AI & GPU" --> H4
  B3 -- "Connects Security & LLMs" --- J4("Y. Cai")
  J4 -- "TEE-GPU" --- K4("Y. Guo")

  classDef aiml fill:#e3f2fd,stroke:#90caf9,stroke-width:2px
  classDef rt fill:#e8f5e9,stroke:#a5d6a7,stroke-width:2px
  classDef sec fill:#fce4ec,stroke:#f8bbd0,stroke-width:2px
  classDef perf fill:#fff3e0,stroke:#ffcc80,stroke-width:2px
  classDef gpu fill:#ede7f6,stroke:#b39ddb,stroke-width:2px
  class A1,B1,C1,D1,E1,F1,G1,H1,I1,J1,K1,L1,M1,N1 aiml
  class A2,B2,C2,D2,E2,F2,G2 rt
  class A3,B3,C3,D3,E3,F3,G3,H3,I3,J4,K4 sec
  class A4,B4,C4,D4,E4,F4,G4 perf
  class H4,I4 gpu
```

3. Technical Innovation Summary

Key Technological Breakthroughs

  • Generative Systems: The concept of generating and evolving core OS components like file systems from high-level specifications using LLMs (SYSSPEC) is a radical departure from traditional development, promising to reduce maintenance overhead and accelerate feature deployment.
  • AI-Synthesized Heuristics: Moving beyond hand-tuned parameters, systems like Vulcan demonstrate that LLMs can search for and discover heuristics (for scheduling, caching, etc.) that are optimal for a specific workload and hardware instance.
  • Extensible GPU Policies via eBPF: The application of eBPF to GPUs (gpu_ext) provides a powerful, safe, and programmable mechanism to customize GPU resource management without modifying kernel drivers, unlocking new potential for multi-tenant GPU environments.
  • Unified OS for zkVMs: ZeroOS provides a foundational software layer for verifiable computation, abstracting away the complexities of different zero-knowledge virtual machines and making it easier to run existing applications in a verifiable manner.
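The extensible-policy idea behind gpu_ext (safe, user-supplied programs attached to resource-management hook points, in the spirit of eBPF) can be illustrated with a toy hook registry. This is a conceptual sketch only: the hook name, context fields, and admission policy below are illustrative assumptions, not the paper's actual API.

```python
class PolicyEngine:
    """Toy analogue of an eBPF-style hook point: user-supplied policies
    attach to named events and are consulted by the 'kernel' side."""

    def __init__(self):
        self.hooks = {}

    def attach(self, event, fn):
        self.hooks.setdefault(event, []).append(fn)

    def run(self, event, ctx):
        # Each attached program sees the context; first non-None verdict wins.
        for fn in self.hooks.get(event, []):
            verdict = fn(ctx)
            if verdict is not None:
                return verdict
        return "default"

engine = PolicyEngine()

# A tenant-aware admission policy for GPU memory requests (hypothetical).
def cap_tenant_memory(ctx):
    if ctx["tenant_mem_mb"] + ctx["request_mb"] > ctx["quota_mb"]:
        return "reject"
    return None  # fall through to the default policy

engine.attach("gpu_mem_request", cap_tenant_memory)

print(engine.run("gpu_mem_request",
                 {"tenant_mem_mb": 900, "request_mb": 200, "quota_mb": 1024}))
# prints "reject"
```

The design point mirrors eBPF's value proposition: policies are swapped at runtime without touching the "kernel" code, and an unmatched event simply falls back to the default behavior.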

Methodology and Application Innovations

  • Compiler-Integrated Redundancy Elimination: The work on Compiling Away the Overhead of Race Detection shows a promising direction where static analysis at compile-time can drastically reduce the runtime cost of dynamic analysis tools, making them more practical for production use.
  • Joint Optimization for AI Serving: EVICPRESS's approach of *jointly* considering compression and eviction for KV caches is a key methodological insight, showing that local, greedy decisions are suboptimal in complex, resource-constrained systems.
  • Expanded Application Domains: This cohort of papers extends OS research into new domains. The primary example is the intense focus on LLM infrastructure, but also includes verifiable computation for blockchains/privacy and multi-architecture sandboxing for modern DevSecOps pipelines.
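The joint-optimization insight behind EVICPRESS can be made concrete with a toy cost model: for each cached KV block, weigh keeping it uncompressed, compressing it, or evicting it against a shared memory budget, rather than deciding each block in isolation. All numbers, thresholds, and the ranking rule below are illustrative assumptions, not the paper's algorithm.

```python
def plan_kv_cache(blocks, budget_mb, compress_ratio=0.5, min_reuse_to_compress=0.1):
    """Choose keep/compress/evict per KV block under a memory budget.

    Each block is (name, size_mb, reuse_prob). Blocks are considered in
    order of expected benefit per MB; compression halves the footprint
    but is only worthwhile if the block is likely to be reused.
    """
    ranked = sorted(blocks, key=lambda b: b[2] / b[1], reverse=True)
    plan, used = {}, 0.0
    for name, size, reuse in ranked:
        if used + size <= budget_mb:
            plan[name] = "keep"
            used += size
        elif used + size * compress_ratio <= budget_mb and reuse > min_reuse_to_compress:
            plan[name] = "compress"
            used += size * compress_ratio
        else:
            plan[name] = "evict"
    return plan

# Three contexts competing for a 6 MB budget (hypothetical workload).
blocks = [("ctx-a", 4.0, 0.9), ("ctx-b", 4.0, 0.5), ("ctx-c", 4.0, 0.05)]
print(plan_kv_cache(blocks, budget_mb=6.0))
# prints {'ctx-a': 'keep', 'ctx-b': 'compress', 'ctx-c': 'evict'}
```

Even this toy shows why the joint view matters: a greedy evict-only policy would drop ctx-b entirely, while treating compression as a middle option keeps a likely-reused block resident at half cost.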

Appendix: Full Paper List

| Title | Primary Focus | Key Authors |
| --- | --- | --- |
| Confidential, Attestable, and Efficient Inter-CVM Communication with Arm CCA | Confidential Computing | S. Abdollahi, M. Kogias, D. Kotz |
| Getting the MOST out of your Storage Hierarchy with Mirror-Optimized Storage Tiering | Storage Systems | K. Tu, A. Arpaci-Dusseau, R. Arpaci-Dusseau |
| VLCs: Managing Parallelism with Virtualized Libraries | Parallelism/Concurrency | Y. Yan, K. Pingali, C. J. Rossbach |
| Defending Event-Triggered Systems against Out-of-Envelope Environments | Real-Time Systems | M. Völp, G. Fohler |
| LEFT-RS: A Lock-Free Fault-Tolerant Resource Sharing Protocol for Multicore Real-Time Systems | Real-Time Systems | N. Chen, X. Dai, A. Burns |
| Compiling Away the Overhead of Race Detection | Parallelism/Concurrency | A. Paznikov, M. Schwarz |
| Accelerating Probabilistic Response-Time Analysis | Real-Time Systems | H. Takahashi, T. Azumi |
| Reexamining Paradigms of End-to-End Data Movement | Networking | C. Fang, T. Stitt |
| Principled Performance Tunability in Operating System Kernels | OS Kernel/Performance | Z. Chen, T. Xu |
| A unified framework for detecting point and collective anomalies in operating system logs | AI/ML for Systems | M. Nasirzadeh, J. Tahmoresnezhad |
| Sharpen the Spec, Cut the Code: A Case for Generative File System with SYSSPEC | AI/ML for Systems | Q. Liu, D. Du, H. Chen |
| Towards Fully-fledged GPU Multitasking via Proactive Memory Scheduling | GPU Management | W. Shen, R. Chen, H. Chen |
| NecoFuzz: Effective Fuzzing of Nested Virtualization via Fuzz-Harness Virtual Machines | Security/Virtualization | R. Ishii, T. Shinagawa |
| EVICPRESS: Joint KV-Cache Compression and Eviction for Efficient LLM Serving | AI/ML for Systems | S. Feng, G. Ananthanarayanan, J. Jiang |
| Trustworthy and Controllable Professional Knowledge Utilization in LLMs with TEE-GPU Execution | Confidential Computing, AI/ML | Y. Cai, Y. Guo, D. Li |
| Vulcan: Instance-Optimal Systems Heuristics Through LLM-Driven Search | AI/ML for Systems | R. Dwivedula, D. Kim, A. Akella |
| pokiSEC: A Multi-Architecture, Containerized Ephemeral Malware Detonation Sandbox | Security/Virtualization | A. Avina, N. K. Chaudhary |
| ZeroOS: A Universal Modular Library OS for zkVMs | Security/Virtualization | G. Zou, D. Boneh, D. L. -K. Wong |
| gpu_ext: Extensible OS Policies for GPUs via eBPF | GPU Management, OS Kernel | Y. Zheng, D. Williams, A. Quinn |
| VeruSAGE: A Study of Agent-Based Verification for Rust Systems | AI/ML for Systems | C. Yang, C. Hawblitzel, S. Lu |

Generated on January 6, 2026. Data sourced from arXiv.org.
