MS1: LLMs in Computer Aided Engineering (CAE): Sketch → CAD → Mesh → Solver
This minisymposium invites recent work on how multimodal large language models are enabling an end‑to‑end CAE pipeline that converts informal sketches into parametric CAD, infers constraints, automates meshing, and produces solver‑ready inputs: advances that promise to shorten design cycles and scale human-AI co‑design. The focus is on recent technical milestones and practical implications of applying multimodal LLMs across the CAE pipeline, from freehand sketch interpretation and parametric CAD generation to automated constraint inference, mesh generation, and preparation of solver‑ready models. Recent work such as CAD‑LLM demonstrates that fine‑tuned models can complete and generate parametric CAD sketches, showing measurable improvements on sketch‑completion benchmarks and suggesting that generative models can accelerate routine modeling tasks. New research introduces solver‑aided, hierarchical domain‑specific languages that offload spatial reasoning to geometric constraint solvers, enabling LLMs to plan and assemble complex geometry while preserving engineering validity. Multimodal LLMs trained or fine‑tuned for 3D CAD tasks (e.g., LLM4CAD) have begun to produce three‑dimensional geometry from combined text and visual prompts, pointing to practical pipelines that bridge design intent and manufacturable geometry. Reviews and surveys in manufacturing journals report growing evidence that generative AI can enhance design‑to‑manufacturing workflows, improving iteration speed and enabling new forms of rapid prototyping and customization across aerospace, automotive, and industrial design domains. Key impacts include the potential to compress days of pre‑processing into hours, reduce manual handoffs that introduce errors, and improve consistency between designer intent and simulation inputs; however, these gains depend on robust constraint alignment, mesh quality metrics, and solver convergence checks.
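To make the solver‑aided pattern concrete, the toy sketch below shows the division of labor: a planner (in practice, an LLM) emits symbolic constraints as residual functions, and a tiny numerical solver finds coordinates that satisfy them. This is an illustrative sketch only; the function names are hypothetical and do not come from CAD‑LLM, LLM4CAD, or any cited system.

```python
import math

# Hypothetical minimal geometric constraint solver: a planner emits residual
# functions; the solver finds coordinates that drive all residuals to zero
# by finite-difference gradient descent on the sum of squared residuals.
def solve_constraints(residuals, x0, lr=0.05, steps=5000, h=1e-6):
    x = list(x0)
    for _ in range(steps):
        grad = [0.0] * len(x)
        for i in range(len(x)):
            xp = x[:]; xp[i] += h
            xm = x[:]; xm[i] -= h
            fp = sum(r(xp) ** 2 for r in residuals)
            fm = sum(r(xm) ** 2 for r in residuals)
            grad[i] = (fp - fm) / (2 * h)   # central difference
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

# Example constraints: point P lies on the line y = x, at distance 5 from origin.
residuals = [
    lambda p: p[1] - p[0],                   # on the diagonal
    lambda p: math.hypot(p[0], p[1]) - 5.0,  # distance 5 from the origin
]
px, py = solve_constraints(residuals, [1.0, 2.0])
```

Production systems replace the gradient loop with a dedicated geometric constraint solver; the point is only that the language model never needs to do the spatial arithmetic itself.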
The minisymposium will emphasize reproducible evaluation, including benchmark datasets, metrics for design‑intent fidelity, mesh quality, and solver convergence, and will surface practical considerations such as dataset provenance, IP and licensing implications for learned CAD patterns, and domain‑specific validation or certification requirements. Attendees will gain a concise picture of where research is already producing deployable tools, where solver‑assisted DSLs and multimodal fine‑tuning are closing critical gaps, and what remains to be solved to move from promising demos to production‑grade CAE automation.
MS2: Large Multimodal AI models in Computational Science and Engineering
This minisymposium examines the emerging role of large multimodal AI models, systems that jointly process text, images, sensor streams, and structured scientific data, in advancing computational mechanics workflows, from constitutive-law discovery and reduced-order modeling to data-driven solvers and uncertainty quantification, with emphasis on reproducible benchmarks, domain-aware architectures, and validation against physics-based criteria. Recent surveys and reviews document rapid progress in multimodal architectures and their cross-domain performance, noting both novel capabilities (e.g., combining visual and textual context to interpret experimental setups) and persistent challenges (robustness, generalization, and physics consistency). In computational mechanics specifically, multimodal models enable fusion of experimental imagery, sensor time series, and simulation outputs to improve model calibration, detect anomalies, and accelerate inverse problems where traditional optimization is costly or ill-posed. Scoping reviews in mechanical engineering highlight early successes in using language models for documentation, code generation, and design reasoning, while also pointing to the need for domain-specific fine-tuning and physics-aware loss functions to avoid spurious or nonphysical predictions.
Key technical directions include: (1) physics-informed multimodal pretraining that embeds conservation laws and symmetry priors into model architectures; (2) surrogate and operator learning that maps multimodal inputs to fast, accurate approximations of PDE solvers; (3) inverse and data-assimilation pipelines that combine learned priors with physics-based forward models and observational data; and (4) human-AI workflows where models translate experimental observations into simulation setups or suggest targeted experiments to reduce uncertainty.
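Direction (2) can be illustrated with a deliberately tiny example: a handful of calls to an "expensive solver" buy a polynomial surrogate that answers later queries for free. Everything here is a stand‑in (the solver is an analytic cantilever‑deflection formula, not a PDE code, and the names are hypothetical), but the data‑efficiency argument is the same one made for operator learning.

```python
# Minimal surrogate-modeling sketch. The "expensive solver" is an analytic
# stand-in for a PDE solve: cantilever tip deflection ~ P*L^3/(3*E*I) with
# P = E = I = 1, so deflection(L) = L^3 / 3.
def expensive_solver(length):
    return length ** 3 / 3.0

def fit_polynomial(xs, ys):
    """Interpolating-polynomial coefficients via Gaussian elimination with
    partial pivoting on the Vandermonde system (fine for a few samples)."""
    n = len(xs)
    A = [[x ** j for j in range(n)] + [y] for x, y in zip(xs, ys)]
    for col in range(n):                      # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    coeffs = [0.0] * n
    for r in reversed(range(n)):              # back substitution
        s = A[r][n] - sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / A[r][r]
    return coeffs

def surrogate(coeffs, x):
    return sum(c * x ** j for j, c in enumerate(coeffs))

# Four solver calls determine an exact cubic surrogate; later queries cost nothing.
xs = [0.5, 1.0, 1.5, 2.0]
coeffs = fit_polynomial(xs, [expensive_solver(x) for x in xs])
```

Real surrogates and neural operators replace the polynomial with learned function classes over high‑dimensional, multimodal inputs, but the replace‑the‑solver economics are identical.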
MS3: Scalable Bayesian Models for Trustworthy AI in Mechanics
This minisymposium invites abstracts that describe scalable Bayesian inference or approximation techniques and provide concrete evidence of their utility in mechanics. Submissions are expected to present novel inference strategies such as scalable variational inference, sparse Gaussian process models, structured posterior approximations, or hybrid sequential Monte Carlo and Hamiltonian Monte Carlo samplers, and to demonstrate how these approaches scale to high‑dimensional parameter spaces and large datasets while preserving calibration. Ideal contributions will include quantitative evaluations of calibration (distinguishing measurement uncertainty from model uncertainty), robustness under distribution shift, and comparisons of computational cost versus uncertainty fidelity, together with integration examples that propagate uncertainty through finite element models or reduced order surrogates for inverse problems, model calibration, reliability analysis, or risk‑aware design.
Recent methodological work shows practical routes to scalable posterior simulation and fast uncertainty estimation beyond simple ensemble heuristics, and evaluations in related fields highlight the importance of decomposing predictive uncertainty into aleatoric and epistemic components for safety‑critical applications. Engineering‑focused studies demonstrate that approximate Bayesian schemes can be applied to model updating under observational uncertainty and that two‑step variational and Monte Carlo hybrids can improve parameter estimation in time‑series and structural identification tasks. Submissions that provide reproducible artifacts (open code, data, or benchmark scripts) or that clearly state compute requirements and deployment pathways will be prioritized, as will papers that address communication of uncertainty to practicing engineers and regulatory bodies and the implications for validation and certification in safety‑critical domains. Practical adoption depends on methods that are both computationally tractable and demonstrably reliable, and recent surveys and tutorials emphasize the need for standardized benchmarks and transparent reporting to avoid overconfidence from poor approximations. Contributions that explicitly evaluate failure modes, propose mitigation strategies (for example, physics‑informed priors that improve sample efficiency and generalization), or quantify solver and surrogate error through probabilistic numerical techniques are particularly valuable because they directly support the transition from prototype demonstrations to auditable, certifiable tools for mechanics applications.
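The aleatoric/epistemic decomposition mentioned above has a standard ensemble‑style form via the law of total variance: average the per‑member noise variances (aleatoric) and take the variance of the member means (epistemic). The sketch below uses synthetic numbers purely for illustration.

```python
# Illustrative variance decomposition for a deep-ensemble-style predictor:
# each member reports a predictive mean and an aleatoric variance.
def decompose_uncertainty(means, variances):
    n = len(means)
    mean = sum(means) / n
    aleatoric = sum(variances) / n                       # avg within-member noise
    epistemic = sum((m - mean) ** 2 for m in means) / n  # disagreement between members
    return mean, aleatoric, epistemic

means = [1.00, 1.10, 0.95, 1.05]      # ensemble predictions (synthetic)
variances = [0.04, 0.05, 0.04, 0.05]  # per-member noise estimates (synthetic)
mu, alea, epi = decompose_uncertainty(means, variances)
total = alea + epi                     # total predictive variance
```

For safety‑critical reporting, the two components matter separately: epistemic variance can be reduced by more data or targeted experiments, while aleatoric variance sets a floor that no amount of training removes.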
MS4: Symbiosis of Generative AI for Computational Models for Process and Geometry Optimization
This minisymposium explores how generative artificial intelligence and physics‑aware computational models form a symbiotic design loop that accelerates process and geometry optimization, enabling rapid exploration of high‑performance designs, automated surrogate construction, and closed‑loop, experiment‑informed refinement. It highlights methods that couple learned generative priors with physics‑based solvers to produce manufacturable, high‑performance designs across engineering domains. Topics of interest include generative approaches for topology and shape synthesis that produce diverse, constraint‑satisfying candidates; data‑efficient surrogate models that replace expensive simulations while preserving fidelity for optimization loops; differentiable and gradient‑aware pipelines that enable end‑to‑end optimization from manufacturing parameters to performance metrics; and closed‑loop frameworks that fuse experimental data, sensor streams, and simulation outputs to refine generative priors and reduce uncertainty. Emphasis is placed on practical evidence: demonstrations of speedups in design iteration (for example, orders‑of‑magnitude reductions in wall‑clock time when high‑quality surrogates replace repeated full simulations), quantitative comparisons of generative methods against classical topology optimization baselines, and case studies showing manufacturability and postprocessing requirements for additive and subtractive manufacturing. Submissions should present measurable outcomes such as improvement in objective metrics, reduction in computational cost, robustness to manufacturing tolerances, and validation against physical experiments or high‑fidelity solvers.
Contributions that address constraint handling, including geometric constraints, multi‑physics coupling, and process‑specific limits, are particularly relevant, as are methods that incorporate uncertainty quantification so that optimized designs include credible performance intervals under variability in material properties or loading conditions. Attention to dataset provenance, reproducibility, and licensing of training data is important because learned generative priors can reflect biases from proprietary or synthetic sources; abstracts should therefore state data sources and any steps taken to ensure generalization. Practical deployment considerations such as integration with existing computer aided design and manufacturing toolchains, compute and memory requirements for training and inference, and strategies for human-machine collaboration in design review and certification are encouraged. Papers that demonstrate hybrid approaches, for example combining physics‑based solvers with learned proposal distributions or using generative models to seed classical optimizers, will be valued for showing how generative artificial intelligence can complement rather than replace established optimization methods. Overall, the minisymposium seeks contributions that move beyond isolated demonstrations to provide reproducible evidence that generative artificial intelligence, when tightly coupled with computational models and process knowledge, can materially improve optimization throughput, design diversity, and manufacturability in real engineering workflows.
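The "generative model seeds a classical optimizer" pattern can be sketched in a few lines. Everything below is synthetic and hypothetical: a Gaussian sampler stands in for a trained generative prior, a toy quadratic stands in for a compliance objective, and a simple coordinate descent stands in for the classical optimizer.

```python
import random

# Hypothetical hybrid loop: a "generative prior" proposes candidate designs,
# a cheap objective ranks them, and the best candidate seeds local refinement.
def objective(x):
    # Toy compliance-like objective with optimum at (2, -1).
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def generative_prior(n, rng):
    # Stand-in for a trained generative model's samples.
    return [[rng.gauss(0.0, 3.0), rng.gauss(0.0, 3.0)] for _ in range(n)]

def local_descent(x, step=0.5, iters=200):
    # Classical refinement: coordinate search with a shrinking step size.
    x = x[:]
    for _ in range(iters):
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x[:]; trial[i] += delta
                if objective(trial) < objective(x):
                    x = trial
        step *= 0.9
    return x

rng = random.Random(0)
candidates = generative_prior(32, rng)      # generative proposals
seed = min(candidates, key=objective)       # cheap surrogate-style ranking
best = local_descent(seed)                  # classical optimizer finishes the job
```

The design point is the division of labor: the prior supplies diverse, plausible starts, and the deterministic optimizer supplies convergence guarantees near each start, which is why such hybrids complement rather than replace established methods.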
MS5: Foundation Models in AI for Computational Mechanics
This minisymposium highlights advances in quantum computing, neuromorphic architectures, and on-device artificial intelligence that are reshaping computational mechanics by enabling new tradeoffs among latency, energy consumption, and fidelity. Recent theoretical and experimental work suggests that quantum algorithms can offer asymptotic speedups for certain large linear-algebra kernels and eigenvalue problems relevant to finite-element and spectral solvers (while practical, noise-robust implementations remain an active research challenge); neuromorphic spiking-neural systems demonstrate orders-of-magnitude reductions in energy per inference for streaming sensor fusion and real-time control tasks; and embedded hardware accelerators and model compression techniques make physics-aware surrogate models feasible at millisecond latencies on edge devices for monitoring, digital twins, and closed-loop control. Submissions should present concrete evidence such as benchmarked speedups, energy and latency measurements, and error and stability analyses when replacing or augmenting classical solvers, along with case studies that address co-design of algorithms and hardware, robustness to noise and distribution shift, programming and tooling challenges, and pathways to integrate these emerging platforms with established simulation pipelines and validation regimes for safety-critical applications.
MS6: Mathematical Foundations, Algorithms, and Applications of Engineering Software 3.0
This minisymposium emphasizes applied advances that turn next‑generation mathematical and algorithmic ideas into deployable engineering software for materials design, manufacturing, battery materials discovery, nuclear engineering, and biomedical applications. Topics of interest include physics‑aware surrogate modeling and reduced order methods that can cut simulation cost by orders of magnitude, differentiable simulation and gradient‑based optimization for end‑to‑end design, probabilistic numerical methods and scalable Bayesian inference for calibrated uncertainty quantification, generative modeling and high‑throughput screening pipelines that accelerate materials discovery and candidate selection, and digital twin and on‑device inference strategies for real‑time monitoring and control in manufacturing and medical devices. Submissions should emphasize concrete, reproducible outcomes: measured reductions in wall‑clock time or computational cost, improvements in design objective metrics, validation against high‑fidelity solvers or physical experiments, and clear statements of dataset provenance and compute requirements. They should also demonstrate how mathematical choices (for example, discretization, solver preconditioning, or prior specification) affect downstream engineering performance, manufacturability, and certification pathways. Contributions that bridge theory and practice by providing open code, benchmark scripts, or experimental validation in domains such as battery electrode screening, additive manufacturing process optimization, reactor safety analysis, or patient‑specific implant design are especially welcome because they illustrate how Engineering Software 3.0 can move from promising algorithms to auditable, production‑grade tools.
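How a mathematical choice such as discretization shows up as a measurable accuracy/cost tradeoff can be demonstrated with a textbook example: second‑order finite differences for the 1D Poisson problem -u'' = π² sin(πx) on (0, 1) with u(0) = u(1) = 0 and exact solution u(x) = sin(πx), solved with the standard Thomas algorithm. The code is illustrative, not drawn from any particular software package.

```python
import math

def solve_poisson(n):
    """Solve -u'' = pi^2 sin(pi x), u(0)=u(1)=0, with n interior points,
    using second-order central differences and the Thomas algorithm."""
    h = 1.0 / (n + 1)
    # Tridiagonal system: (-u_{i-1} + 2 u_i - u_{i+1}) = h^2 f(x_i)
    a, b, c = [-1.0] * n, [2.0] * n, [-1.0] * n
    d = [h * h * math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
    for i in range(1, n):                 # forward sweep
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * n
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

def max_error(n):
    """Max-norm error against the exact solution u(x) = sin(pi x)."""
    h = 1.0 / (n + 1)
    u = solve_poisson(n)
    return max(abs(u[i] - math.sin(math.pi * (i + 1) * h)) for i in range(n))
```

Calling `max_error` at two resolutions exhibits the expected second‑order convergence: refining the mesh by roughly 4x shrinks the error by roughly 16x, which is exactly the kind of quantified discretization/cost statement submissions are asked to make.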
MS7: Machine Learning Frontiers in Multiscale, Materials, and Process Modeling
To be updated.
MS8: AI-Empowered Additive Manufacturing
This minisymposium explores the transformative intersection of artificial intelligence (AI) and advanced manufacturing, specifically focusing on how AI-empowered modeling and simulation are accelerating the evolution of Additive Manufacturing (AM). As manufacturing systems become increasingly data-intensive and autonomous, AI methodologies, including Generative AI (GenAI), Large Language Models (LLMs), and Vision Language Models (VLMs), are reshaping design, process control, and qualification frameworks. The focus is on integrating physics-based modeling with data-driven intelligence to enhance predictive capability, scalability, quality assurance, and sustainability for AM. Topics include AI-enhanced digital twins that leverage high-fidelity and reduced-order simulations for real-time process monitoring, defect detection and mitigation, process control, and qualification during fabrication. Hybrid multiphysics and surrogate modeling frameworks enabling accelerated prediction and uncertainty quantification are of particular interest. Contributions addressing multimodal foundation models, edge computing, and cloud-enabled analytics for adaptive control and predictive maintenance are encouraged. The minisymposium also highlights generative approaches for multimaterial systems, functional gradients, manufacturability-aware design, and LLM-supported human-AI collaboration in design and process planning. By integrating sensing, explainable AI, and advanced computational frameworks, this session aims to advance resilient, trustworthy, and high-performance manufacturing systems grounded in rigorous AI-empowered simulation methodologies.
MS9: AI and Digital Twins for Natural Hazards and Geosystems
Many of today’s societal needs, such as mitigation of natural hazards, energy and environmental sustainability, development of resilient civil infrastructure, and access to natural resources, require studying the physical properties and processes of the Earth and geophysical systems across all scales from both scientific and technological perspectives. Moreover, natural hazards such as landslides, earthquakes, floods, and wildfires pose major threats to human lives and critical infrastructure. Machine learning techniques and digital twin technology offer great potential for advanced management of geosystems and natural hazards.
This mini-symposium aims to provide a forum to discuss recent advances in applications of Artificial Intelligence and Digital Twins to enhance monitoring and assessment of geosystems and infrastructure, as well as prediction and mitigation of natural hazards. The topics of interest include, but are not limited to:
Data-driven and physics-informed modeling of geosystems and natural hazards across scales
Advanced numerical modeling techniques for geosystems and natural hazards
Advances in sensing and monitoring techniques
Geohazard prediction and assessment
Data analytics in geosystems applications
High-performance computing for digital twins
Reduced order modeling
Inverse modeling techniques
Probabilistic forecasting of natural hazards
Real-time assessment and monitoring of structures and infrastructure
Infrastructure maintenance and retrofitting
MS10: AI for Faster, More Reliable Engineering Simulation Workflows (Misc track)
This minisymposium invites contributions that advance the infrastructure and acceleration of computational engineering workflows—focusing on methods that reduce time‑to‑answer, improve robustness, and make simulation pipelines easier to run, scale, and trust across domains such as structures, fluids, thermal, manufacturing, and multiphysics. We welcome work on runtime and memory reduction, GPU/HPC/cloud scalability, convergence and reliability improvements, failure prediction and mitigation, automated solver and workflow configuration, adaptive‑fidelity strategies, and acceleration of multi‑query workloads including parameter sweeps, UQ, optimization loops, and real‑time updates. We also encourage deployment‑oriented research involving integration into CAE toolchains, benchmarking, reproducibility, and best practices. The goal is to highlight AI‑enabled advances that strengthen the simulation backbone of modern engineering practice by delivering measurable gains in performance, stability, and usability.