Special Session Ⅰ: Language Intelligence and Data Storage Revolution: The Role of Natural Language Processing in Solid-State Storage/Embedded Systems
Session Chair: Assoc. Prof. Jinhua Cui & Assoc. Prof. Meng Zhang——Huazhong University of Science and Technology, China
Session Co-Chair: Asst. Prof. Shiqiang Nie——Xi'an Jiaotong University, China
Special Session Information:
With the rapid advancement of information technology, we find ourselves amidst the tidal wave of the information age. Solid-state storage systems and embedded technologies have become integral components of modern computing systems, while advances in natural language processing (NLP) continue to shape the way we handle data. This special session aims to explore the pivotal role of NLP in the core technologies of this digital world, together with its applications and challenges within solid-state storage and embedded systems.
Topics of interest include but are not limited to:
Data management and optimization strategies driven by language intelligence
The role of NLP in enhancing the performance of solid-state storage systems
The applications of speech recognition and natural language interaction in embedded systems
Intelligent recommendation and control technologies based on NLP in embedded systems
Data mining and analysis in embedded systems powered by NLP techniques
Compression, encoding, and storage optimization of textual data
Data security and privacy protection techniques based on NLP
Applications and challenges of NLP in ensuring security in embedded systems
Important Dates:
Abstract Submission: May 24, 2025
Full Paper Submission: May 31, 2025
Special Session Ⅱ: Extract Information from Multimodal Documents with Handcrafted Languages
Session Chair: Prof. Xiwen Zhang——Beijing Language and Culture University, China
Key Words: Multimodal documents, handwritten text, speech recognition, language models, pattern recognition, information extraction
Special Session Information:
Human language is communicated through speech and through written text, much of which originates as handwritten text. Handwritten text is often mixed with drawings and paintings to convey additional information, and speech is often accompanied by video. Such multimodal documents can be analyzed, recognized, and understood using pattern recognition, machine learning, and deep learning. Increasingly, large language models, vision models, vision-language models, and multimodal models are being applied to extract richer information from these multimodal documents.
By hosting this session, we look forward to inspiring valuable scientific discoveries, accelerating the development of multimodal perception, analysis, and generation technologies for handcrafted text and speech, and promoting their wider and deeper application to language and text in society.
Topics of interest include but are not limited to:
Recognizing multimodal documents
Understanding multimodal documents
Foundation models for multimodal documents
Important Dates:
Abstract Submission: May 24, 2025
Full Paper Submission: May 31, 2025
Special Session Ⅲ: Research on Information Retrieval and Recommendation Systems Driven by Large Language Models
Session Chair: Assoc. Prof. Bo Xu——Dalian University of Technology, China
Key Words: Information Retrieval, Recommendation Systems, Social Media Processing, Large Language Models
Special Session Information:
This special session aims to bring together innovative research on information retrieval and recommendation systems empowered by large language models. Submissions exploring the application of large language models to improving the efficiency and effectiveness of information retrieval are welcome, including but not limited to improving search algorithms, handling complex queries, and enhancing document ranking. For recommendation systems, we particularly encourage research on leveraging large language models to generate personalized recommendations, deeply understand user preferences, and optimize item-user matching. Work on social media processing is also within the scope of this session. We welcome both theoretical investigations and practical application cases that showcase innovative uses of large language models in information retrieval and recommendation systems.
Topics of interest include but are not limited to:
Large Language Models
Information Retrieval
Recommendation Systems
Intent Understanding
User Preference Modeling
Personalized Recommendation
Important Dates:
Abstract Submission: May 24, 2025
Full Paper Submission: May 31, 2025
Special Session Ⅳ: Event Extraction, Understanding, and Reasoning
Session Chair: Assoc. Prof. Wei Liu——Shanghai University, China
Key Words: Textual Event Extraction, Event Understanding, Event Reasoning, Event Knowledge Graph
Special Session Information:
This session focuses on the crucial aspects of textual event extraction, understanding, and reasoning within the domain of natural language processing. Event extraction aims to identify event triggers, types, arguments, and their roles from unstructured text, which is fundamental for building structured event knowledge bases. However, current methods face challenges such as low accuracy in low-resource and open-domain extraction scenarios and the high labor intensity of pattern-based approaches.
Event understanding delves into semantic analysis of events, including comprehending event semantics, relationships between events, and the context in which they occur. Existing research has limitations in fully capturing complex semantic relationships and handling diverse event types.
Event reasoning involves inferring new event-related knowledge based on existing events, such as temporal, causal, and logical relationships. Present reasoning techniques struggle with complex reasoning scenarios and lack sufficient interpretability.
Topics of interest include but are not limited to:
Textual event extraction (including event, event argument, and event relationship extraction)
Event extraction based on multi-modal scenarios
Event reasoning and prediction
Important Dates:
Abstract Submission: May 24, 2025
Full Paper Submission: May 31, 2025
Special Session Ⅴ: LLM in Action: Practical Implementations in Vertical Domains: Smart Cities, Digital Twins, Logistics, and Healthcare
Session Chair: Assoc. Prof. Shengfa Miao——Yunnan University, China
Session Co-Chair: Assoc. Prof. Shuangyin Li——South China Normal University, China
Key Words: LLM Industry Applications, Temporal-spatial LLM, Data Scarcity, Latency Constraints, Multimodal Data
Special Session Information:
While large language models (LLMs) have demonstrated groundbreaking capabilities, their real-world deployment in vertical domains remains a complex, multi-faceted challenge. This session shifts the focus from theoretical advances to practical LLM implementations, spotlighting successful use cases, deployment bottlenecks, and scalable solutions in smart cities, healthcare, logistics, and digital twins. We invite researchers and industry practitioners to share insights on overcoming domain-specific barriers, such as data scarcity, latency constraints, regulatory compliance, multimodal data, and user trust, while maximizing LLM value in mission-critical scenarios.
Topics of interest include but are not limited to:
The integration of Large Language Models (LLMs) with temporal-spatial AI
Research on Understanding and Inference of Multimodal Data with LLMs in Vertical Domains
Research on Data Generation for Vertical Domains using LLMs
Industry implementation cases of large language models
Important Dates:
Abstract Submission: May 24, 2025
Full Paper Submission: May 31, 2025
Special Session Ⅵ: Information Extraction with Large Language Models
Session Chair: Assoc. Prof. Chengjie Sun——Harbin Institute of Technology, China
Key Words: Information extraction, Large language models, Zero-shot learning, Named entity recognition, Relation extraction, Event extraction
Special Session Information:
Information extraction in new domains is a long-standing and important task in natural language processing. Its difficulty lies mainly in the lack of annotated data and annotation guidelines. Methods based on pre-training achieve satisfactory results when a large amount of labeled data is available, but they perform poorly without labeled data and struggle to output new entity category labels. Large language models offer a new solution to information extraction in new domains: they can output new entity labels, yet the accuracy achieved by directly applying them to extraction still falls far short of practical requirements. This session focuses on new methods and technologies for information extraction based on large language models, providing technical support for the broader application of information extraction.
Important Dates:
Abstract Submission: May 24, 2025
Full Paper Submission: May 31, 2025