Call for Collaboration: Early Assessment, Monitoring, and Intervention Research on Alzheimer's Disease Based on the DIKWP Model and Artificial Consciousness Systems


Contents

1. Background and significance of the project

2. A review of the current status of research at home and abroad

3. Research objectives and research content

   3.1 Data Layer: Multimodal Data Acquisition and Signal Flow Design

   3.2 Information layer: task-driven and perceptual structure docking mechanism

   3.3 Knowledge layer: large model knowledge fusion and personalized cognitive reasoning module

   3.4 Intelligence layer: feedback adjustment and strategy evaluation system based on artificial consciousness

   3.5 Purpose layer: cognitive goal guidance and individual state reconstruction mechanism

4. The technical route and innovation points

5. Project team and basic conditions

6. Expected results and transformation paths


1. Background and significance of the project

**(1) Population aging and the challenge of Alzheimer's disease.** With the acceleration of global population ageing, dementias such as Alzheimer's disease (AD) have become a serious public health problem. There are currently more than 55 million people with dementia worldwide, with about 10 million new cases each year (WHO, 2023), of which Alzheimer's disease is the main type. AD is characterized by progressive deterioration of memory and cognitive function, which not only seriously affects the quality of life of patients, but also places a heavy burden on families and the healthcare system. More worryingly, late diagnosis or misdiagnosis often occurs in clinical practice. Therefore, how to achieve early assessment (identifying mild cognitive impairment or early AD before obvious dementia symptoms appear), continuously monitor the process of cognitive decline, and implement effective interventions to delay the course of the disease have become urgent problems for the academic and clinical fields.

**(2) The limitations of traditional cognitive screening and intervention models.** At present, clinical assessment of cognitive decline mainly relies on scales such as the Mini-Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA), but these tools usually provide only cross-sectional, coarse-grained assessments of cognitive function and have limited sensitivity to the slight changes of early-stage AD. For example, MoCA has a sensitivity of only about 74% for mild cognitive impairment (MCI) at commonly used cut-off scores. At the same time, cognitive screening results are easily affected by education level and subjective state, introducing a degree of bias. In addition, the traditional model tends to follow a passive chain of "symptom onset – medical visit – symptomatic treatment": patients are usually detected and treated only after a significant decline in function, by which point the optimal window for intervention may already have been lost. On the intervention side, standardized cognitive training or drug therapy has predominated, with little real-time adaptation to the individual. For example, fixed-difficulty memory exercises or general-purpose drug regimens cannot dynamically adapt to a patient's daily fluctuations in state and specific deficit profile. This passive, static model struggles to meet the active, continuous, and individualized management needs of chronic progressive diseases such as AD.

**(3) New opportunities from artificial intelligence and brain-computer interfaces.** In recent years, the emergence of artificial intelligence technology has brought new hope for the early detection and intervention of cognitive impairment. For example, brain-computer interface technologies such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) can provide objective indicators of neural activity in the brain and have been shown to help detect biomarkers of Alzheimer's disease early. Studies have shown that EEG analysis can distinguish healthy elderly people from MCI/AD patients, serve as a valuable early-diagnosis tool, and be integrated into multimodal diagnostic systems. As a means of monitoring blood-oxygen changes in the cerebral cortex, fNIRS has the advantages of being safe, portable, and relatively inexpensive. Experiments have demonstrated that combining cognitive training with real-time feedback on brain activity (i.e., neurofeedback) is expected to improve cognitive functions such as working memory in elderly MCI populations. In addition, advanced AI algorithms such as large language models (LLMs) have shown potential in medical text analysis and cognitive state prediction. For example, studies have used large language models to analyze clinical notes in electronic medical records to detect early signs of cognitive decline; the results show that LLMs perform well in identifying narrative cues in patients with early MCI, complementing traditional machine learning. On the other hand, virtual reality (VR) cognitive training can improve the participation and training effect of elderly patients through immersive interactive tasks. Experiments have shown that VR cognitive training can significantly improve the overall cognitive function of elderly people with mild cognitive impairment, especially memory, attention, and executive function.
These advances suggest that the combination of multimodal biological signal acquisition, intelligent model analysis, and immersive training is expected to build a closed-loop "monitoring-intervention-remonitoring" system to achieve active management of cognitive decline.

**(4) The urgent need for a unified model of the cognitive degradation mechanism.** However, most of the above-mentioned new technologies are currently applied independently of each other, and a unified framework to integrate them effectively is lacking, leaving obvious structural defects and processing bottlenecks. For example: (1) Brain-computer interfaces provide rich data, but how can massive, heterogeneous sensor data be transformed into meaningful "information" and "knowledge" and interpreted in combination with patients' specific cognitive tasks? Existing systems often stop at signal classification or simple indicator calculations and lack an overall understanding of the brain's cognitive state. (2) Large models have strong pattern recognition and knowledge representation capabilities, but their reasoning process is a "black box" without explicit purpose driving it, and they also lack the ability to process real-time physiological data directly. Large models are prone to hallucination or bias, and there is no guarantee that their output will always meet medical safety requirements. (3) Cognitive screening and VR training operate in isolation: screening provides diagnosis but not continuity, and training provides intervention but insufficient feedback. The lack of closed-loop linkage between the two makes it difficult for training to be dynamically optimized against the problems found in screening, while screening cannot evaluate the real-time effect of training. Information silos lead to opaque decision-making that is difficult to correct in time. Therefore, there is an urgent need for a new theoretical framework that can understand and simulate the whole "cognitive decline-intervention-perception" process at the system level, integrating data collection, information processing, knowledge reasoning, intelligent decision-making, and purpose driving into an adaptive, intelligent closed-loop system.

**(5) The introduction and advantages of the DIKWP artificial consciousness model.** In response to the above challenges, the team of Professor Duan Yucong of Hainan University proposed the DIKWP artificial consciousness model, which provides a new framework for understanding and coping with cognitive deterioration in the elderly. DIKWP is a new cognitive model that extends the classical DIKW (pyramid) model with a "Purpose/Intention" layer. Its core idea is to divide the cognitive process of an agent into five levels: Data-Information-Knowledge-Wisdom-Purpose, and to couple the layers closely through a network structure to achieve two-way feedback and iterative updating. In other words, DIKWP holds that intelligent systems must not only extract information, form knowledge, and make wise decisions from data, but must also be driven and calibrated by higher-level intent. This is similar to how the human brain works: the brain processes information through neural networks, but human behavior and decisions are often driven by goals, motivations, and values. From the perspective of artificial consciousness, DIKWP provides a unified cognitive language and structure that can map every step of the machine's reasoning to a human-understandable semantic process. This brings three major advantages to the modeling of complex "cognitive decline-intervention" systems:

Global goal orientation: The introduction of the purpose layer ensures that the system always operates around established cognitive health goals. For example, taking "delaying the decline of the patient's memory" as the top-level purpose can guide lower-level data collection and intervention decision-making to serve this goal, preventing the AI model's outputs from drifting away from it and ensuring value alignment and safety.

Multi-layer semantic integration: The DIKWP model establishes a clear semantic interface at each layer, so that low-level data can be mapped to high-level knowledge and intent. Rather than being separated, the layers form a dynamic loop through bidirectional semantic communication. This means that the physiological data captured by the sensors (such as brain waves) not only supports upward judgments of cognitive state (information → knowledge), but the upper layers' understanding of the situation and its goals also influences how the data are interpreted and attended to (purpose → intelligence → ... → data), overcoming the rigidity of traditional one-way pipelines. The data-information-knowledge-intelligence-purpose flow constitutes a closed-loop system, giving the system human-like self-adjustment capabilities.

Artificial Consciousness Characteristics: DIKWP is essentially a simulation of the mechanisms of human consciousness. Through the combination of information processing and intention-driven, it simulates functions and characteristics similar to human consciousness in artificial systems. For example, the system not only passively responds to input data, but also has "subjective intent" like a human, actively choosing which information to focus on, what knowledge to use to reason, and reflecting and adjusting its own decisions. This gives the system autonomy and adaptability, which is particularly suitable for coping with highly individualized and dynamic changes in the cognitive state of the elderly.

In summary, the artificial consciousness system based on the DIKWP model is expected to break the limitations of traditional methods and build an intelligent closed-loop framework based on data, supported by knowledge, and centered on intent. It can transform the multimodal data collected by the brain-computer interface into a deep understanding of the patient's cognitive state, and continuously optimize the intervention strategy driven by artificial consciousness, achieving a major leap in the modeling of the "cognitive decline-intervention perception" system. This has important theoretical significance and application value for improving the early assessment and intervention effect of Alzheimer's disease and reducing the social burden.

2. A review of the current status of research at home and abroad

Focusing on the theme of this project, this section reviews the current research status at home and abroad from four aspects: brain-computer interface, large model, cognitive screening and virtual reality cognitive training, analyzes the structural defects and bottlenecks of existing methods, and clarifies the new breakthroughs provided by the DIKWP artificial consciousness model.

2.1 Brain-computer interface and multimodal cognitive assessment: Brain-computer interface (BCI) technology has attracted extensive attention in the detection of cognitive impairment in recent years. Among them, non-invasive means represented by electroencephalography (EEG) are regarded as promising tools for early detection of Alzheimer's disease due to their low cost and high temporal resolution. Foreign studies have shown that the power and network connectivity indicators of certain EEG frequency bands differ significantly between healthy elderly people and patients with MCI or early AD, and can be used as biomarkers to assist diagnosis. For example, Ahmed et al. found that the accuracy of EEG algorithms combining time-frequency analysis and machine learning for AD diagnosis was well above chance. Some scholars in China have used the complex network characteristics of EEG to predict the trend of MCI converting to AD, with preliminary results. At the same time, functional near-infrared spectroscopy (fNIRS) is emerging, which reflects neural activity by monitoring local blood-oxygen changes in the cortex. Compared with EEG, fNIRS is more resistant to interference, portable, and easy to use, making it especially suitable for cognitive research in the elderly population. Some cutting-edge work has tried to use fNIRS for working-memory training feedback in MCI patients: in a study by Lee et al., elderly people with MCI received cognitive training in a VR environment while fNIRS measured their prefrontal activation and provided real-time feedback; the training group showed significant improvement on executive function tests compared with the control group. This suggests that incorporating neurofeedback into cognitive training can help enhance efficacy. Overall, brain-computer interfaces provide a new way to objectively quantify cognitive state.
However, there are still processing bottlenecks in their application: first, EEG and fNIRS data are high-dimensional and noisy, which vary significantly between different individuals and environments, and traditional data-driven methods are susceptible to overfitting and noise. Second, a single modality is often not enough to fully reflect the cognitive process, and multimodal data fusion is required, but this brings complexity to the algorithm and computation, and the optimal strategy for EEG-fNIRS-behavioral multimodal fusion is still inconclusive. Third, the existing systems mostly focus on offline analysis and diagnosis, and lack real-time interpretation and feedback mechanisms. For example, an EEG classification model may give results such as "MCI risk", but cannot explain exactly which cognitive functions are impaired and what interventions should be taken. It can be seen that a structured framework is needed in the field of brain-computer interface to associate low-level signal processing with high-level cognitive meaning, open up the "sensory-cognitive" chain, and integrate it with intervention methods in a closed loop.

2.2 Large models and intelligent diagnosis: After making breakthroughs in natural language processing, large-scale pre-trained models (LLMs such as GPT) have gradually been introduced into the medical and health field. In the study of cognitive impairment, applications of large models fall into two main types. The first is textual/linguistic analysis. Cognitive impairment is often accompanied by subtle changes in language ability, and researchers use pre-trained language models to analyze written or spoken text to detect signs of cognitive decline. For example, a study at Massachusetts General Hospital in the United States evaluated the performance of LLMs such as GPT-4 in identifying clues to cognitive decline in clinical notes from electronic health records (EHRs). By analyzing fragments of physician notes written in the several years before an MCI diagnosis, it was found that through prompt learning the LLM could infer patients' cognitive state from the few words describing their daily life and memory, complementing traditional neural network models. This suggests that LLMs can warn of cognitive problems in advance from unstructured text. The second is multimodal data fusion. Some scholars have explored combining LLMs with brain imaging and genetic data to achieve cross-modal AD risk prediction. For example, a South Korean study using an LLM architecture that fused neuropsychological testing and MRI metrics to predict the risk of conversion to AD within two years in patients with MCI reported significantly improved accuracy. The advantage of large models is their wide knowledge coverage and strong reasoning ability; they are expected to act as an "expert system" or assistant in the cognitive field.
However, their application also faces structural drawbacks. First, large models inherently lack transparency and explainability; especially in medical contexts, black-box decision-making is difficult to win the trust of doctors and patients. Second, generic large models may not be well adapted to medical expertise, and using them directly for diagnosis may lead to absurd conclusions (i.e., "hallucinations") or bias when facing special populations. Third, current LLMs mainly process offline data and lack an interface with real-time sensing data, as well as an active feedback mechanism to adjust the reasoning process according to the goal. In short, large models are still at an exploratory stage in cognitive impairment applications and need to be combined with domain knowledge and feedback control to play a greater role.

2.3 Cognitive screening and digital assessment: Traditional cognitive screening methods, such as the MMSE and MoCA scales, have been used for decades and validated on large samples, forming established scoring categories and clinical decision criteria. In recent years, it has gradually been recognized at home and abroad that digital technology can improve the sensitivity and convenience of screening, and a variety of digital cognitive assessment schemes have emerged. For example, smartphone apps let users complete cognitive tests and upload results for remote assessment; computerized cognitive batteries capture subjects' response times with precise timing; numerical representations have even been extracted from data on subjects' daily life (e.g., voice, keyboard typing patterns) to predict cognitive risk. However, current reviews show that these new methods still do not fundamentally change the fragmented and static nature of screening: most digital screenings simply use electronic scales, are administered infrequently (usually every few months or years), and their outcome measures are still coarse scores, unable to continuously monitor subtle changes along the cognitive curve. At the same time, there is a disconnect between screening and intervention: targeted follow-up after assessment is lacking, often requiring referral to a physician or separately arranged training. This means that the role of screening is to "find problems" rather than "solve them". Some studies at home and abroad have tried a closed-loop screening-training model. For example, a start-up company in the United States has developed a game-based cognitive test that, upon detecting a user's attention lapse, immediately recommends a corresponding brain-training game to exercise attention. Teams in China are also exploring the combination of "mini-program self-test + music therapy/cognitive games".
However, most of these attempts remain at the application level, lack unified theoretical guidance, and have not been sufficiently evaluated. Bottlenecks include: How can digital screening results be reliably mapped to clinical cognitive function? How can the learning effect and subject motivation across repeated tests be controlled? How can an individualized intervention be chosen based on screening results rather than "one-size-fits-all"? These problems show that the traditional screening paradigm alone cannot meet the needs of refined management; a more intelligent evaluation system and intervention system need to be deeply integrated.

2.4 Virtual Reality and Cognitive Function Training: VR technology has attracted attention in the field of cognitive function rehabilitation due to its immersive and interactive advantages. Many studies have confirmed that VR cognitive training is more engaging than flat-screen cognitive exercises and can improve training compliance in elderly patients. Teams abroad, such as in South Korea and Canada, have developed VR games for elderly people with MCI to practice memory, navigation, classification and other abilities, observing some improvement on standard cognitive tests after short-term intervention. Some scholars in China have also developed a "VR + somatosensory" cognitive rehabilitation system, which lets patients complete daily tasks (such as supermarket shopping or bus rides) in virtual scenes to train executive function and spatial memory. Some senior living communities in Beijing have piloted VR memory training games in elderly care activities. These efforts show that VR is superior to traditional paper-and-pencil training in ecological validity: improvements gained in realistic virtual situations are more likely to transfer to real life. However, current VR cognitive training has common structural defects. First, personalization is insufficient. Most VR training content is fixed and difficult to adjust dynamically according to the cognitive weaknesses and progress of different patients. Although some systems offer difficulty selection, they often require human judgment to adjust and lack intelligent adaptivity. Second, physiological and brain feedback is lacking. The training process mainly records behavioral completion, with no way to know the patient's brain load and attention during training. If a training task is too easy or too difficult, the system cannot sense this and adjust its strategy in time.
Third, the effect evaluation lags behind. The effect is usually evaluated by cognitive tests after the completion of the first phase of training, and this post-hoc evaluation cannot guide real-time optimization in the process. It can be seen that in order to achieve the maximum benefit, VR training needs to be combined with real-time monitoring and evaluation systems to form a closed loop of "stimulus-response-evaluation-improvement". This is precisely where the bottleneck of the current research lies, and it is also where this project intends to break through.

2.5 Comprehensive analysis of existing research methods: In summary, a large number of achievements have been made in the field of cognitive impairment detection and intervention at home and abroad, but in general most studies are still limited to a single level or a one-way process: either focusing on signal acquisition and pattern recognition (data/information layer practice) or emphasizing intervention means and effect evaluation (wisdom layer practice), lacking a holistic view throughout. For example, a review pointed out that current applications of artificial intelligence in dementia care require a combination of bottom-up and top-down approaches. The DIKWP artificial consciousness model is proposed to fill this theoretical gap. It connects data, information, knowledge, wisdom and purpose from the perspective of systems theory and consciousness, which can be regarded as a paradigm shift in the application of artificial intelligence to cognitive impairment. As Professor Duan Yucong pointed out, "By embedding the key layer of 'purpose' inside the model, we can not only make AI smarter, but also ensure that it always serves human values and safety needs." The DIKWP model thus provides a new theoretical breakthrough: academically, it constructs a unified five-layer semantic architecture, integrating perception, cognition, decision-making, and goals that were previously separated; in application, it offers an innovative way to address the "black box" problem of large models, the interpretability problem of BCI, and the personalization problem of intervention. It is foreseeable that applying the DIKWP artificial consciousness model to the early assessment and intervention of Alzheimer's disease is expected to form a new generation of intelligent diagnosis and treatment systems and lead breakthroughs in this interdisciplinary field.

3. Research objectives and research content

The overall goal of this project is to build an intelligent assessment, monitoring and intervention system for Alzheimer's disease (AD) driven by artificial consciousness based on the DIKWP model. The system can perceive the cognitive state of patients at multiple levels, integrate large-model knowledge for intelligent reasoning, formulate and adjust intervention strategies under the framework of artificial consciousness, and realize a closed loop of comprehensive assessment, continuous monitoring and personalized intervention for the early AD population. To achieve this goal, the project will organize the research content into five modules according to the five-layer structure of DIKWP, forming a systematic research route from the "data layer" at the bottom to the "purpose layer" at the top. The research content and sub-objectives of each layer are as follows:

3.1 Data Layer: Multimodal Data Acquisition and Signal Flow Design

Sub-objectives: Build a collection system covering cognitively relevant multimodal signals, formulate data stream processing specifications, and provide reliable and rich raw data input for the upper layer.

Research Contents:

Multimodal physiological and behavioral data collection: The data layer focuses on capturing the patient's objective state, serving as the "senses" of the artificial consciousness system. This project will integrate a variety of sensing channels, including: (1) Electroencephalography (EEG): recording the electrical activity of the brain using high-density wearable EEG devices, focusing on EEG rhythms related to memory and attention (e.g., theta and alpha wave power) and functional connectivity indicators to reflect the functional status of neural networks. (2) Functional near-infrared spectroscopy (fNIRS): a portable fNIRS device is used to monitor hemodynamic changes in brain regions such as the frontal lobe to evaluate the cerebral blood-flow response under cognitive loads such as executive function and working memory. Combining the fNIRS signal with EEG allows complementary characterization of neural activity (one reflecting electrical signals, the other blood oxygen). (3) Eye tracker: infrared eye tracking collects information such as the subject's fixation point and pupil diameter. Eye-movement features in cognitive tasks (e.g., fixation duration, saccade path) are a direct reflection of attention and information-processing efficiency and can assist in identifying early attention disorders. (4) Physiology and behavior: integrated heart rate variability (HRV) sensors, somatosensory accelerometers, etc., record indicators related to cognitive status such as emotional agitation and slowed movement. The whole system is optimized for age-friendliness, for example using all-in-one wearable devices and wireless synchronous data collection, to ensure the naturalness and continuity of data acquisition.

Synchronous design of task scenarios and data: In order to give meaning to the collected data, this project will design a series of standardized cognitive task scenarios for subjects to perform multimodal data collection when completing tasks. Tasks include: memory (e.g., word list recall), attention (e.g., target stimulus monitoring), executive function (e.g., n-back working memory task), etc. These tasks can be presented in a computer or VR environment and the difficulty is automatically adjusted according to the subject's ability. The project will establish a time synchronization mechanism to accurately align task events (stimulus presentation, participant responses) with EEG, fNIRS, eye movement and other signals to generate annotated multimodal time series data. For example, in the n-back task, the time point of each stimulus and the time of the subject's key press will be recorded and associated with the corresponding brain signal fragment for subsequent analysis.
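The time-synchronization mechanism described above can be sketched as a small mapping from task events on a shared clock to sample indices of the recorded signals. The names (`TaskEvent`, `epoch_indices`), the epoch window, and the 250 Hz sampling rate are illustrative assumptions, not the project's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class TaskEvent:
    """One annotated task event on the shared system clock."""
    label: str    # e.g. "stimulus_onset" or "key_press"
    t_sec: float  # event time in seconds

def epoch_indices(event_t: float, sfreq: float, t_min: float = -0.2, t_max: float = 0.8):
    """Convert an event time into (start, stop) sample indices of the
    signal epoch around the event, e.g. for event-related analysis."""
    start = int(round((event_t + t_min) * sfreq))
    stop = int(round((event_t + t_max) * sfreq))
    return start, stop

# An n-back stimulus presented at t = 12.5 s with EEG sampled at 250 Hz:
ev = TaskEvent("stimulus_onset", 12.5)
start, stop = epoch_indices(ev.t_sec, sfreq=250)  # 0.2 s before to 0.8 s after
```

In practice the same indices would cut matching windows out of the EEG, fNIRS, and eye-movement streams once all devices share one clock.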

Signal Pre-Processing & Flow Control: Develop a unified data stream processing pipeline that includes: signal denoising (filtering, independent component analysis de-artifact), artifact removal (e.g., blinking, head movement artifact detection), data normalization, segmentation, and feature extraction. The project will develop online signal quality monitoring and pre-processing algorithms based on real-time processing requirements to ensure the reliability and stability of the data received by the upper layer. At the same time, the flow path of data within the system is planned: for example, EEG and fNIRS data are preprocessed and feature extracted on local edge computing devices, and then uploaded to a central server; Eye tracking and behavioral data is fed directly into the information layer module. Each data stream adopts the publish-subscribe mechanism, and dynamically adjusts the sampling rate and upload frequency according to the requirements issued by the purpose layer (for example, when the system purpose focuses on memory detection, increase the data sampling density of the channels related to the memory task). This kind of flow control realizes the response of the data layer to the purpose of the upper layer, so that the perception has a certain "selectivity", and reduces the transmission and processing load of irrelevant data.
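The denoising and normalization steps of such a pipeline can be sketched with standard SciPy/NumPy tools. This is a minimal example (zero-phase band-pass plus z-scoring) under assumed parameters, not the full pipeline with ICA-based artifact removal described above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, sfreq, lo=1.0, hi=40.0, order=4):
    """Zero-phase Butterworth band-pass, a common EEG denoising step.
    Cutoffs are normalized to the Nyquist frequency."""
    b, a = butter(order, [lo / (sfreq / 2), hi / (sfreq / 2)], btype="band")
    return filtfilt(b, a, x)  # filtfilt applies the filter forward and backward

def zscore(x):
    """Normalize so features are comparable across sessions and subjects."""
    return (x - x.mean()) / (x.std() + 1e-12)

def preprocess(raw, sfreq):
    """Minimal pipeline: band-pass -> normalize. Artifact rejection and
    segmentation would slot in between these steps."""
    return zscore(bandpass(np.asarray(raw, dtype=float), sfreq))
```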

Multi-modal data fusion interface: At the backend of the data layer, a multi-modal data fusion interface is established to align and store different types of data at the same time, so as to prepare for further processing in the information layer. For example, event-related potentials from EEG can be combined with fixation sequences of eye movements to form semantic-rich recordings (e.g., 0.3 seconds after stimulus X is presented, the subject produces a P300 wave and fixation is biased to the left). This interface also supports simple multimodal anomaly detection: if a sudden drop in data quality in one modality (e.g., electrode drop-off, signal loss) is found, it can notify the upper layer in time or trigger an alternate solution (e.g., temporarily relying more on other modalities). Through the above measures, the data layer will provide a comprehensive and robust perception foundation for the system, laying the foundation for intelligent processing at subsequent levels.
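The modality drop-out handling mentioned above can be prototyped as a simple quality gate: flag a channel that has flat-lined (electrode drop-off) or saturated, and fall back to the remaining modalities. The thresholds and labels here are placeholders, not clinically validated values.

```python
import numpy as np

def channel_quality(x, flat_tol=1e-8, clip_value=200e-6):
    """Crude per-channel quality flag: near-zero standard deviation
    suggests a dropped electrode; amplitudes beyond clip_value (volts)
    suggest saturation or a motion artifact."""
    x = np.asarray(x, dtype=float)
    if x.std() < flat_tol:
        return "flatline"
    if np.abs(x).max() > clip_value:
        return "saturated"
    return "ok"

def select_modalities(quality: dict) -> list:
    """Keep only modalities whose current quality flag is 'ok', so the
    information layer can temporarily rely more on the healthy channels."""
    return [m for m, q in quality.items() if q == "ok"]
```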

3.2 Information layer: task-driven and perceptual structure docking mechanism

Sub-objective: Transform the collected underlying data into meaningful information representations, and realize the initial perception of the subject's state and task-related semantic extraction.

Research Contents:

Signal Feature Extraction and Pattern Recognition: The information layer focuses on "interpretation of data", which is equivalent to processing and refining the original data into information units. This project will develop feature extraction algorithms and pattern recognition models for each modality at this layer, including: extracting spectral energy, relative power, coherence, event-related potential amplitude and other information from EEG that can indicate the stage of cognitive processes; Some features of the oxyhemoglobin curve extracted from fNIRS (e.g., the average rate of increase during the task period) indicate the level of activation of brain regions; Indicators such as average fixation duration and saccades were extracted from eye tracking data to reflect attention allocation; and task performance indicators such as reaction time and accuracy were extracted from behavioral records. These features will be fused with data, e.g. by concatenating simultaneous EEG and fNIRS features into a multimodal feature vector. Machine learning/deep learning models are then applied to perform pattern recognition on these features. Thanks to task annotation, the model can be trained as a classifier or regressor, such as to determine whether the current participant is normal or slightly impaired, or to estimate the current task load level. In particular, the project will develop real-time state detection algorithms, such as a cognitive load index based on EEG theta wave power, and an attentional fluctuation index based on saccade frequency, so that the system can have a quantitative understanding of the current cognitive state of the participant.
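As a concrete sketch of one such real-time indicator, the snippet below computes a theta-power-based load index from a single EEG channel via a plain FFT periodogram. The theta/alpha ratio is a simplification assumed here for illustration; the project's actual indices may differ.

```python
import numpy as np

def band_power(x, sfreq, band):
    """Average power of `x` within a frequency band, from an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sfreq)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def cognitive_load_index(eeg, sfreq):
    """Illustrative load index: theta (4-8 Hz) power relative to
    alpha (8-13 Hz) power; theta dominance is read as higher load."""
    theta = band_power(eeg, sfreq, (4.0, 8.0))
    alpha = band_power(eeg, sfreq, (8.0, 13.0))
    return theta / (alpha + 1e-12)
```

On a signal window dominated by 6 Hz activity this index is high; on a window dominated by 10 Hz activity it is low, which is the quantitative contrast a real-time state detector would track.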

Task semantics and perceptual structure docking: The information layer not only processes data patterns, but also interprets their meaning in the task context. To this end, we introduce task semantic modeling: each cognitive task is given a semantic description, including the task goal (e.g., "memorize the words that appear in the list"), the step flow, the criteria for a correct answer, and so on. These descriptions constitute the "information structure" definition of the task. At runtime, the information layer connects this task structure with the perceptual data: for example, when a participant is detected answering incorrectly in a memory task while the EEG theta rhythm is significantly elevated, this can be interpreted as "memory retrieval failure under high cognitive load". This docking requires rules or simple inference mechanisms that map perceived patterns to task-specific semantic information. In this project, ontologies or knowledge graphs will be used to represent the task structure and possible participant states. By linking "task - action - physiological response - possible cognitive state" chains in the knowledge graph, semantic annotation of information is realized. For example, nodes represent "memory task", "wrong answer", "theta rise", etc., and edges represent logical relationships, such as "theta rise and error -> possible difficulty: insufficient working memory". On this basis, the information layer outputs semantically rich perceptual information, such as "the current working memory load is high, errors are occurring, and memory capacity is strained".
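The rule-based mapping from perceived patterns to task-level semantics might look like the following sketch. The rule table and its labels are illustrative stand-ins for the ontology/knowledge-graph edges described above:

```python
# Each rule maps a set of perceived events to a task-level semantic label,
# mirroring edges like "theta rise and error -> insufficient working memory".
RULES = [
    ({"task": "memory", "answer": "wrong", "theta": "high"},
     "memory retrieval failure under high cognitive load"),
    ({"task": "attention", "saccades": "frequent"},
     "attention fluctuation"),
]

def interpret(observation):
    """Return semantic labels for every rule whose conditions are a
    subset of the observed pattern."""
    labels = []
    for cond, label in RULES:
        if all(observation.get(k) == v for k, v in cond.items()):
            labels.append(label)
    return labels
```

In a full system these rules would be generated from the knowledge graph rather than hand-written, but the subset-matching logic is the same.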

Multimodal information fusion and visualization: Comprehensive presentation of multi-source information is another function of the information layer. On the one hand, fusion algorithms apply confidence weighting or logical rule synthesis to the detection results from different modalities to obtain an overall judgment of the participant's state. For example, if the EEG indicates decreased attention and frequent blinking in the eye-movement data corroborates fatigue, the combined output is "low attention, possibly fatigued". On the other hand, the information layer is responsible for making machine-interpreted information available to the intelligence layer, doctors, or patients in interpretable form, which involves information visualization and report generation. We will develop real-time dashboards that display key metrics such as live cognitive-load curves, attention-index indicators, and task performance summaries. The information layer can also generate a summary report after a task is completed, for example: "The user's average accuracy in the memory task was 80%, one standard deviation below the norm; during this period theta waves were significantly elevated, suggesting difficulty with short-term memory storage." This information supports the system's internal decision-making and can also be reviewed and validated by experts.
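The confidence-weighted fusion described above can be sketched minimally as a weighted vote; the modality names and confidence values are hypothetical:

```python
def fuse(detections):
    """Confidence-weighted vote over per-modality state detections.
    `detections` maps modality -> (state_label, confidence in 0..1).
    Returns the winning state and its normalized support."""
    scores = {}
    for modality, (state, conf) in detections.items():
        scores[state] = scores.get(state, 0.0) + conf
    total = sum(scores.values())
    best = max(scores, key=scores.get)
    return best, scores[best] / total
```

So an EEG attention detector and an fNIRS detector agreeing on "low attention" can outvote a single fatigue cue from eye movement, while the normalized support conveys how confident the combined judgment is.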

Two-way interaction between the information layer and the data layer: The processing results of the information layer can in turn guide acquisition and tuning at the data layer. For example, when the information layer detects "poor signal quality" or "uncertain state", it can request that the data layer increase the sampling rate or enable backup sensing (e.g., relying more on fNIRS when EEG artifacts are frequent). Likewise, if the information layer identifies that the participant has difficulty responding to a certain type of stimulus, it can instruct the data layer to focus on recording the relevant data in the next task. This two-way interaction forms a perception-interpretation cycle: the data layer provides raw material and the information layer feeds back its needs, so that low-level perception and high-level information extraction adapt to each other and the system's sensitivity to the environment and the subject gradually improves. This mechanism embodies the coupling of perception and cognition in the artificial consciousness system, ensuring that information acquisition stays consistent with the task goal.

3.3 Knowledge layer: large model knowledge fusion and personalized cognitive reasoning module

Sub-objective: To build a cognitive reasoning module that integrates large-scale pre-trained models and domain knowledge, which can absorb general medical knowledge and individual characteristics, and achieve in-depth understanding and prediction of subject status.

Research Contents:

General Medical Knowledge Base and Ontology Construction: The knowledge layer is the system's "memory", carrying extensive background knowledge about Alzheimer's disease and cognitive science. The project will first build an Alzheimer's disease knowledge base covering disease mechanisms, cognitive function architecture, assessment scales, interventions, and more. Sources include existing knowledge graphs (e.g., AD-related concepts in UMLS), medical guidelines, and the scientific literature. This knowledge is organized as ontologies and knowledge graphs, with key nodes such as "hippocampus-memory", "acetylcholine-neurotransmitter", and "cognitive training-plasticity" connected by causal, associative, and subsumption relationships. This provides an expert semantic network for interpreting system states and planning interventions.
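A knowledge graph of this kind reduces to subject-relation-object triples. The following toy sketch uses the example nodes named above; the relation names are illustrative, not the project's ontology:

```python
# Triples (subject, relation, object) for a miniature AD knowledge graph.
TRIPLES = [
    ("hippocampus", "supports", "memory"),
    ("acetylcholine", "is_a", "neurotransmitter"),
    ("cognitive_training", "promotes", "plasticity"),
    ("hippocampal_degeneration", "impairs", "memory"),
]

def neighbors(node):
    """All (relation, other_node) pairs touching `node`, traversing
    edges in either direction."""
    outgoing = [(r, o) for s, r, o in TRIPLES if s == node]
    incoming = [(r, s) for s, r, o in TRIPLES if o == node]
    return outgoing + incoming
```

Querying the neighbors of "memory" surfaces both the supporting structure (hippocampus) and a candidate cause of decline (hippocampal degeneration), which is the kind of lookup the reasoning modules below would perform.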

Large model knowledge fusion: On this basis, large pre-trained language models (LLMs) are introduced to enrich the inference ability of the knowledge layer. An LLM, trained on a massive corpus, contains broad linguistic and commonsense knowledge and understands texts such as descriptions of cognitive impairment and medical histories. In the project, we will use a large model such as GPT-4 or a Chinese-language equivalent (medical-domain models will also be considered) and adapt it to the domain via prompt learning or fine-tuning, enabling the model to: (1) answer questions about AD and cognitive function based on the knowledge base (e.g., "Which brain regions may underlie a decline in sustained attention?"); (2) provide interpretation of multimodal information for specific patients (e.g., generate an analysis report from input cognitive test results plus EEG characteristics); and (3) assist decision support (e.g., listing candidate interventions for a given cognitive impairment). To avoid large-model hallucination, the knowledge layer will adopt a Retrieval-Augmented Generation (RAG) framework: before the LLM generates an inference, relevant fragments are retrieved from the knowledge base and supplied as contextual prompts, ensuring that the model's answers are grounded in authoritative knowledge. For example, when the model is needed to evaluate a patient's status, it first retrieves "MCI definitions" and "cognitive training methods" from the knowledge base, then combines them with the patient's information to produce a comprehensive response. Through this deep integration, the knowledge layer becomes an intelligent assistant that "possesses massive medical knowledge and understands context", greatly enhancing the reasoning depth of the system.
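The retrieve-then-prompt pattern of RAG can be sketched without any model call. Real systems use embedding-based retrieval rather than the word-overlap ranking below, and the knowledge-base entries are invented placeholders:

```python
# Toy knowledge base; entries stand in for retrieved guideline fragments.
KB = {
    "mci_definition": "MCI is a decline in cognition greater than expected "
                      "for age that does not markedly interfere with daily life.",
    "cognitive_training": "Structured exercises targeting memory and attention "
                          "can promote neural plasticity.",
}

def retrieve(query, k=1):
    """Rank knowledge-base entries by word overlap with the query
    (a crude stand-in for vector similarity search)."""
    q = set(query.lower().split())
    ranked = sorted(KB.items(),
                    key=lambda kv: len(q & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in ranked[:k]]

def build_prompt(patient_summary, query):
    """Prepend retrieved knowledge so the LLM answers grounded in it."""
    context = "\n".join(retrieve(query, k=2))
    return (f"Context:\n{context}\n\n"
            f"Patient:\n{patient_summary}\n\n"
            f"Question: {query}\nAnswer using only the context above.")
```

The assembled prompt is what would be sent to the LLM; constraining the answer to the retrieved context is the hallucination-control step described above.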

Personalized cognitive model and digital twin: The knowledge layer not only contains general knowledge, but also gradually accumulates knowledge specific to each individual, forming a "personalized cognitive model". The project will build a "cognitive digital twin" for each participant: a subgraph of individual knowledge maintained at the knowledge layer that records the person's background and previous interactions, such as family genetic history, education level, baseline cognitive assessment results, performance curves for each training session, and responses to different interventions. This information is continually updated, giving the system an ever more complete picture of the individual. On this basis, the knowledge layer can invoke different inference paths for the same task depending on the person. For example, for an elderly person with visual impairment, the system "knows" that their visual perception test results are chronically low, so it reduces the weight of visual test results in reasoning and reminds the intelligence layer to consider visual compensation (a knowledge adjustment for individuals with different attributes). In addition, the digital twin can be used for simulated prediction: the system can internally adjust certain parameters to estimate how a person's cognitive indicators would change if a given intervention were taken, in order to assess its potential. This resembles a doctor predicting a patient's likely progress from past experience, except that the knowledge layer bases its prediction on accumulated data and large-model reasoning. This personalized cognitive reasoning module gives the system a "growing understanding of you" for each person, realizing truly individualized intelligence.
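A minimal data structure for such a twin might look like this; the `weight` rule encodes the visual-impairment example above, and the 0.3 down-weighting factor is purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class CognitiveTwin:
    """Per-person knowledge subgraph: static background plus an
    ever-growing interaction history."""
    background: dict                               # e.g. {"visual_impairment": True}
    history: list = field(default_factory=list)    # (metric, value) over time

    def record(self, metric, value):
        self.history.append((metric, value))

    def weight(self, metric):
        """Down-weight modalities known to be unreliable for this person,
        e.g. visual test scores for a visually impaired user."""
        if metric == "visual_test" and self.background.get("visual_impairment"):
            return 0.3
        return 1.0
```

In the full system these weights would be learned and the history would feed the simulated-prediction mechanism; the sketch only shows the person-specific adjustment idea.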

Inference and Interpretation Mechanism: The knowledge layer will implement an inference engine that combines ontology rules and large models for cognitive state assessment and intervention-suggestion reasoning. Reasoning styles include deduction (e.g., rule-based: if short-term memory test scores drop and theta waves increase significantly, infer "working memory decline"), induction (based on past data, someone's pattern is similar to cases where a previous intervention worked), and analogy (the large model makes recommendations based on similar textual cases). It is important that the reasoning process of the knowledge layer is transparent and explainable to the upper layers. To this end, we will record the chain of inference and, especially when the LLM participates, capture the intermediate reasoning steps. For example, before the LLM draws a conclusion, we ask it to list its reasons and rationale, which are then translated into structured explanations (e.g., "Based on a 20% persistent decline in memory scores and EEG theta waves 50% above baseline over the last 3 months, combined with the findings of literature X, memory deterioration is judged to be rapid, possibly due to hippocampal degeneration"). These explanations serve as knowledge evidence for decision-making at the intelligence layer and also improve the credibility of the system. Through thorough knowledge fusion and reasoning, the knowledge layer plays the role of the system's intelligent brain, providing a solid basis for complex cognitive judgment and strategy formulation.
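The deductive rule with a recorded explanation chain can be sketched directly from the example above; the 20% and 50% thresholds are taken from that example and should be read as placeholders for clinically validated cut-offs:

```python
def assess(memory_drop_pct, theta_rise_pct):
    """Deductive rule with a recorded explanation chain, so the upper
    layers can see why the conclusion was reached."""
    chain = []
    if memory_drop_pct >= 20:
        chain.append(f"memory score down {memory_drop_pct}% (>= 20% threshold)")
    if theta_rise_pct >= 50:
        chain.append(f"EEG theta {theta_rise_pct}% above baseline (>= 50% threshold)")
    conclusion = ("rapid memory deterioration, possible hippocampal degeneration"
                  if len(chain) == 2 else "no rule fired")
    return conclusion, chain
```

Returning the chain alongside the conclusion is the "knowledge evidence" handed to the intelligence layer; LLM-generated rationales would be appended to the same chain.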

3.4 Intelligence layer: feedback adjustment and strategy evaluation system based on artificial consciousness

Sub-goal: To realize the "smart" functional module of the artificial consciousness system, which is responsible for integrating knowledge for decision-making, dynamically adjusting intervention strategies, and evaluating the effect, and continuously optimizing in the closed loop.

Research Contents:

Comprehensive assessment and decision-making of cognitive state: The intelligence layer is the decision-making center of the artificial consciousness system. It receives an assessment of an individual's current cognitive state and long-term trends, along with recommendations for potential intervention options, and makes intelligent decisions on this basis, i.e., selects appropriate action strategies. Specifically, this project will develop a comprehensive cognitive state assessment model at the intelligence layer: multi-dimensional information from the knowledge layer (memory ability rating, attention ability rating, emotional state, etc.) is synthesized into a quantitative representation of the patient's overall state (such as a 0-100 cognitive health index), together with a judgment of the current major problems (such as "short-term memory is significantly below the population average" or "depression may be affecting cognition"). On this basis, the intelligence layer consults the strategy library to select an intervention action. The strategy library includes a variety of possible measures, such as adjusting training difficulty (increasing/decreasing task difficulty), changing the training type (memory -> attention), adding rest or motivation, and suggesting medication counseling or referral. Decision-making at the intelligence layer follows explicit optimization criteria, such as maximizing expected cognitive improvement or balancing multiple objectives (improving memory without lowering mood). The problem can be modeled as a multi-armed bandit or a Markov Decision Process (MDP), and reinforcement learning can be used to learn the optimal strategy from the interaction history.
Examples of decisions made by the intelligence layer include: "Given the user's recent slow progress and increased fatigue on memory tasks, reduce the frequency of memory tasks and add fun games to increase engagement"; or "After this training session the attention index was still low, so trigger an EEG neurofeedback training to enhance attention". Through the intelligence layer, the system transforms knowledge into practical action, realizing the leap from "knowing" to "doing".
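The multi-armed bandit formulation mentioned above can be sketched with a standard epsilon-greedy learner. The strategy names and reward scale are hypothetical; a deployed system would define reward from the effect-evaluation scores described later:

```python
import random

class StrategyBandit:
    """Epsilon-greedy multi-armed bandit over intervention strategies."""
    def __init__(self, strategies, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {s: 0 for s in strategies}
        self.values = {s: 0.0 for s in strategies}   # mean observed reward

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))   # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, strategy, reward):
        """Incremental mean update after observing the intervention effect."""
        self.counts[strategy] += 1
        n = self.counts[strategy]
        self.values[strategy] += (reward - self.values[strategy]) / n
```

An MDP/reinforcement-learning formulation would additionally condition the choice on the patient's current state; the bandit is the stateless special case.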

Artificial Consciousness Feedback Modulation Mechanism: What makes the intelligence layer unique is that it does not merely enforce fixed rules, but has the capacity for self-feedback and regulation, an important hallmark of artificial consciousness. We will introduce a "dual loop" architecture: in addition to the basic perception-decision loop described above, a metacognitive loop is added. The metacognitive loop is initiated by the intelligence layer to monitor and regulate the effects of the basic loop. For example, the intelligence layer continuously monitors the implementation of intervention strategies: if the desired improvement is not achieved, or a negative impact occurs (such as the user abandoning training because the difficulty was raised), the metacognitive module identifies this deviation. It then reflects on whether the basis for the decision was adequate and whether the knowledge reasoning missed anything, and may retrieve more information or update model parameters through the knowledge layer. The intelligence layer then adjusts future decision-making strategies to avoid a recurrence of similar problems. This self-monitoring, reflection, and regulation cycle allows the system to improve progressively. Put simply, the intelligence layer is concerned not only with the effect of its current decisions on patients, but also with evaluating the quality of its own decisions, continuously learning and improving. In technical terms, the metacognitive module can record each decision and its consequences, use machine learning to discover patterns of decision errors, and account for them in subsequent decisions (for example, by increasing the penalty on error-prone choices). Through this feedback adjustment, the system acquires a degree of "introspection", just as humans reflect on their own judgments to do better next time.
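The record-and-reflect part of the metacognitive loop can be sketched as a decision log with a simple failure-rate detector; the 0.5 failure threshold and three-trial minimum are arbitrary illustrative values:

```python
class MetaMonitor:
    """Metacognitive loop sketch: log each decision and its outcome,
    and flag strategies whose failure rate suggests a systematic
    decision error worth reflecting on."""
    def __init__(self, failure_threshold=0.5, min_trials=3):
        self.log = []
        self.failure_threshold = failure_threshold
        self.min_trials = min_trials

    def record(self, strategy, succeeded):
        self.log.append((strategy, succeeded))

    def flagged(self):
        stats = {}
        for strategy, ok in self.log:
            n, fails = stats.get(strategy, (0, 0))
            stats[strategy] = (n + 1, fails + (not ok))
        return {s for s, (n, f) in stats.items()
                if n >= self.min_trials and f / n > self.failure_threshold}
```

A flagged strategy is exactly the "deviation" the text describes: the trigger for retrieving more knowledge, updating model parameters, or penalizing that choice in future decisions.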

Strategy evaluation and effect prediction: To achieve a true closed loop, the intelligence layer needs to evaluate the effect after each round of intervention and feed the results back to the data layer and the knowledge layer. The project will develop an effect evaluation model at the intelligence layer: on a short-term scale, immediate effects are assessed from data during training (such as changes in task performance and trends in physiological indicators), for example whether the attention index rises after an attention-training session, or whether a drop in heart rate indicates increased concentration. On a medium- to long-term scale, periodic assessments (e.g., weekly MoCA tests or memory tests) verify whether the overall cognitive trend is improving. For example, if the system decides to focus on memory training for a month, the memory score is reassessed after one month to check for a statistically significant improvement. The intelligence layer compares these evaluation results with the originally expected goals and outputs a score for the strategy's effect. If a strategy performs worse than expected, the intelligence layer marks it as less suitable under similar conditions and will tend to choose other options in the future (this is also part of metacognitive regulation). Conversely, if the effect is outstanding, the strategy pattern is reinforced. At the same time, the intelligence layer can use the knowledge layer to predict potential effects to assist decision-making. For example, a knowledge-based simulation might show that continuing the current training regimen for two months improves memory by 5%, while switching to another regimen may improve memory by 8% at the cost of increased fatigue; the intelligence layer then weighs the pros and cons and may choose to adjust.
In the continuous evaluation-prediction cycle, the system realizes adaptive optimization: towards the predetermined cognitive improvement goal, the path is dynamically adjusted to approach the goal as quickly and stably as possible.
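One minimal way to score a strategy against its expected goal, as described above, is to normalize the observed change by the expected gain; the verdict thresholds (1.0 and 0.5) are illustrative policy choices, not project specifications:

```python
def effect_score(baseline, expected, observed):
    """Score an intervention: 1.0 means the expected gain was fully
    realized, 0.0 means no change, negative means decline."""
    expected_gain = expected - baseline
    if expected_gain == 0:
        return 0.0
    return (observed - baseline) / expected_gain

def verdict(score):
    """Translate the score into the reinforcement/demotion behavior
    described in the text."""
    if score >= 1.0:
        return "reinforce strategy"
    if score >= 0.5:
        return "keep strategy"
    return "mark strategy as less suitable"
```

For instance, a memory score that was expected to rise from 70 to 75 but reached 74 scores 0.8: short of the goal but close enough that the strategy is retained rather than demoted.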

Safety Monitoring and Ethical Constraints: The decision-making of the intelligence layer is always monitored by a safety and ethics module to ensure that the behavior of the artificial consciousness system conforms to medical ethics and the patient's interests. This module includes preset constraint rules (e.g., training must not cause excessive stress; no medical diagnosis or medication advice is given in place of a physician) and values (e.g., patient dignity and privacy take priority). When the intelligence layer issues a policy such as "increase training intensity", the safety module reviews whether a safety threshold (such as the total daily training time limit) would be exceeded; if so, execution is rejected and an adjustment is prompted. Ethically, the intelligence layer must maintain empathy and respect when interacting with patients, for example avoiding negative comments about their cognition that could hurt their self-esteem. This can be achieved through ethical knowledge in the knowledge base and tone control in the LLM. The safety and ethics module acts as the system's "superego", providing value orientation within the artificial consciousness closed loop and ensuring that the technology's application in medical care does not go astray.
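The pre-execution review step can be sketched as a constraint check over a proposed action; the specific limits (60 minutes per day, one difficulty step) are invented examples of the thresholds the text mentions:

```python
# Illustrative safety thresholds; real values would be set clinically.
SAFETY_RULES = {
    "max_daily_minutes": 60,      # total training time limit per day
    "max_difficulty_step": 1,     # no abrupt difficulty jumps
}

def review(action):
    """Return the list of violated constraints for a proposed action;
    an empty list means the decision may be executed."""
    violations = []
    if action.get("daily_minutes", 0) > SAFETY_RULES["max_daily_minutes"]:
        violations.append("exceeds daily training time limit")
    if abs(action.get("difficulty_delta", 0)) > SAFETY_RULES["max_difficulty_step"]:
        violations.append("difficulty change too abrupt")
    return violations
```

A non-empty violation list corresponds to the "reject execution and prompt adjustment" path described above.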

3.5 Purpose layer: cognitive goal guidance and individual state reconstruction mechanism

Sub-objectives: To achieve the "intention" function at the top level of the system, set clear cognitive rehabilitation goals, dominate the overall strategic direction, and provide goal reference for the lower levels by reconstructing individual states.

Research Contents:

Cognitive Rehabilitation Goal Setting: The purpose layer is responsible for determining the ultimate purpose of the system's operation. Based on clinical expert experience and patient needs, this project will set goals at several levels within the purpose layer. The overall goal is to "delay the progression of Alzheimer's disease and maintain or improve the patient's cognitive function". This overarching goal is decomposed into specific, quantifiable sub-goals, such as "no more than a 5% decline in memory score in one year" or "a 3-point improvement in the activities-of-daily-living score", along with shorter-term goals such as "this month, focus on extending sustained attention to more than 5 minutes". These goals range from hard indicators, such as cognitive scale scores, to soft indicators, such as improvements in the patient's subjective well-being. The purpose layer uses them as evaluation criteria to steer the work of every layer toward the goal. For example, when the overall goal emphasizes memory, the intelligence layer will favor memory tasks when weighing multi-task training. These goals are set on a scientific basis by the project team and can be individualized for each patient (e.g., regaining work capacity for a young patient with MCI versus maintaining daily independence for an older patient). Goals can be updated as conditions change, but remain stable over a given period, providing a steady direction.
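A quantified sub-goal set can be checked mechanically. The sketch below treats each sub-goal as a metric with a minimum acceptable value; the metric names and thresholds are illustrative:

```python
def goal_progress(goals, measurements):
    """Fraction of quantified sub-goals currently met.
    goals: {metric: minimum acceptable value};
    measurements: {metric: latest observed value}."""
    met = sum(measurements.get(m, float("-inf")) >= v
              for m, v in goals.items())
    return met / len(goals)
```

The purpose layer could use such a progress fraction as one input when deciding whether to keep the current goal decomposition or re-weight it.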

Individual state of consciousness reconstruction: Another key function of the purpose layer is to reconstruct and model the individual's overall conscious/cognitive state. This "reconstruction" refers to forming, within the system, a high-level abstraction of the patient's current cognitive and psychological state, akin to an internal simulation of the patient's "mind", and can be seen as the artificial consciousness's "subjective projection" of the user. The approach is to extract several core state variables from the integrated outputs of the intelligence layer and the knowledge layer's digital twin, such as "cognitive vitality" (reflecting overall cognitive health), "learning motivation", "emotional stability", and "behavioral compliance". The purpose layer scores or describes these high-level states by analyzing all recent data and decision outcomes. For example, after a series of tasks, the system may produce a reconstruction such as: "Current user: cognitive vitality = medium (down 2 points from last month), learning motivation = low, emotion = good, adherence = high". This reconstructed state helps the system understand, for instance, that the user may have been disengaged recently and needs to be motivated with more interesting tasks, or that a good mood allows the challenge level to be raised. During reconstruction, the LLM's ability to abstract and summarize multimodal information helps distill complex data into high-level semantics. The purpose layer updates this internal state model regularly, allowing the system to form an evolving mental portrait of the user.
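A crude, rule-based stand-in for the LLM-based summarization described above maps recent low-level metrics to the high-level state labels; all metric names and cut-off values here are hypothetical:

```python
def reconstruct_state(recent):
    """Map recent low-level metrics (all in 0..1) to coarse high-level
    state labels, imitating the purpose layer's state reconstruction."""
    def level(x, lo, hi):
        return "low" if x < lo else ("high" if x > hi else "medium")
    return {
        "cognitive_vitality": level(recent["accuracy"], 0.6, 0.85),
        "learning_motivation": level(recent["sessions_completed_ratio"], 0.5, 0.9),
        "emotional_stability": level(recent["mood_score"], 0.4, 0.7),
    }
```

The resulting dictionary is the machine counterpart of the example summary above ("cognitive vitality = medium, learning motivation = low, ...") and would be refreshed on each update cycle.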

Global Intent-Driven Coordination: With clear goals and a high-level understanding of the user, the purpose layer acts as the system's "commander", sending global driving signals to the lower layers. This drive is manifested in: (1) Task priority allocation: according to the current intent, decide which task types and data to focus on next. For example, if the goal is to improve attention, the information and intelligence layers are instructed to devote more resources to attention-related assessment and training, and the data layer can likewise enhance the relevant data collection. (2) Resource and parameter control: the purpose layer can macroscopically adjust system parameters, such as total training time and the maximum number of tasks per day, to keep them aligned with the goal. If the user shows fatigue and the aim is long-term persistence, the purpose layer may instruct "reduce the time per training session but increase the frequency". (3) Emergency purpose handling: if the knowledge layer detects an emergency (such as a sharp deterioration in cognitive ability or a risk of emotional collapse), the purpose layer can immediately raise the priority of the corresponding goal and trigger an emergency strategy (such as notifying the clinician to intervene at once). Together, these means allow the purpose layer to bind the parts of the system together and ensure that all decisions revolve around the end goal.

Human-machine collaboration and subjectivity maintenance: It is worth emphasizing that when setting goals, the purpose layer must also fully consider the patient's own wishes and human-machine collaboration; that is, the purpose of the artificial consciousness system needs to align with the purposes of the person. To this end, the project will involve patients and their families in the goal-setting process (e.g., interviews to learn the outcomes they care about most) and design an interface that lets patients understand the system's goals and proposed adjustments. If the patient rejects a particular training or goal, the system must compromise. This prevents the AI from forcibly pursuing certain metrics while ignoring human feelings. Ultimately, through coordination at the purpose layer, machine intelligence and human purpose are unified: the system's ultimate purpose is to help patients achieve their desired state of health. In this process, patients gradually benefit from the system and feed back their own subjective experience, forming a positive, human-centered cycle.

Through the layer-by-layer research and integration of the above five-layer modules, this project will theoretically build the structure of the DIKWP artificial consciousness-driven cognitive assessment-intervention system. At each layer, we not only carry out methodological innovation, but also pay attention to the interface and collaboration between the upper and lower layers to ensure the integrity and closed-loop nature of the entire system architecture. On this basis, we will further elaborate on how each module connects and forms a closed-loop driving logic in the technical roadmap section, as well as the main innovation points of the project.

4. The technical route and innovation points

This project takes the five-layer DIKWP artificial consciousness structure as its core engine, builds a complete technical route along the chain "data → information → knowledge → wisdom → purpose", and realizes layer-by-layer feedback through two-way coupling, forming a closed-loop driving logic from data to intent. Figure 1 (omitted) illustrates the overall architecture and information flow of the project system: the underlying sensors collect multimodal data, which is converted into task-related information through pattern recognition at the information layer; the knowledge layer integrates domain knowledge for advanced reasoning; the intelligence layer makes intervention decisions and evaluates their effects; and the top purpose layer provides goal guidance and global coordination. At the same time, bottom-up information transmission and top-down control feedback between the layers constitute a networked coupling. This multi-layered cyclic system breaks through the traditional linear input-processing-output model, giving the system human-like flexibility and adaptability.

The technical route is described in steps as follows:

Data acquisition and preprocessing (corresponding to the data layer): In the real application scenario, elderly subjects wear equipment such as EEG and fNIRS to enter the virtual reality or computer cognitive training environment. The system first collects multi-modal data according to the predetermined scheme, and performs preprocessing and feature extraction in real time. This step solves the problem of "objective perception" and provides a rich and reliable source material. The innovation lies in the closed-loop data acquisition of multimodal fusion: we have designed a mechanism to dynamically adjust sampling and sensing according to the needs of the task, so that the perception process is no longer passive, but driven by high-level needs, reflecting a certain "attention" allocation function (the preliminary feature of artificial consciousness).

Pattern Recognition and Information Generation (corresponding to the information layer): The acquired data stream is fed into the information layer for pattern recognition. Through machine learning models and task semantic rules, the system maps low-level signal patterns into meaningful information (metrics, events). This stage is equivalent to mimicking the process of interpreting sensory signals in the human cerebral cortex. The innovation of the technical route lies in task-driven information extraction: instead of looking at the signal in isolation, it refines the information in combination with the current task situation. For example, the same theta wave enhancement has different meanings in the memory task and attention task, and the system can distinguish and assign corresponding semantics. This contextual awareness makes the information layer output more useful to the upper layer.

Intelligent reasoning and knowledge fusion (corresponding to the knowledge layer): The information layer output is submitted to the knowledge layer along with historical data and a series of background knowledge. The knowledge layer uses large models and knowledge graphs to conduct comprehensive reasoning on the current state of subjects, form a diagnostic assessment of cognitive function and analysis of potential causes, and retrieve appropriate intervention knowledge. This step is equivalent to an AI "expert" analyzing the data and making recommendations. The innovations include: (a) deep integration of large model + knowledge graph: we let LLM reason under knowledge constraints to ensure the accuracy and relevance of answers; (b) Individual digital twins: The knowledge layer not only has general medical knowledge, but also individual-specific models, so that the reasoning can truly be differentiated from person to person. This gives the system a greater degree of customization than traditional expert systems.

Decision-making and strategy execution (corresponding to the intelligence layer): Based on the judgments and suggestions provided by the knowledge layer, the intelligence layer optimizes the decision and selects the best intervention strategy. The policy is then sent to the lower levels for execution (e.g., controlling the difficulty of the VR scenario, switching tasks, etc.) to intervene with the user. At the same time, the intelligence layer monitors the execution process and evaluates the immediate effect. This link closes the "sense-decide-act" loop, allowing the system to influence its environment. The innovations are: (a) closed-loop decision-making: unlike traditional open-loop systems, we introduce mechanisms such as reinforcement learning so that the AI continuously updates its strategy according to the implementation effect, achieving adaptive evolution of decision-making; (b) metacognitive feedback: the intelligence layer evaluates the effectiveness of its own decisions and, when necessary, requests more information from the knowledge layer or adjusts model parameters. This amounts to the self-reflection of artificial consciousness, allowing the system to progressively optimize decision quality.

Objective management and global governance (corresponding to the purpose layer): The highest level of intent, the purpose layer, continuously monitors whether the entire system is operating towards the set goals. If deviations are detected, such as cognitive function not improving or even worsening, the purpose layer will change the targeting strategy (e.g., extend the project time, adjust the main target area) and notify the levels to reconfigure. At the same time, the purpose layer communicates with users/clinical experts about target modifications to ensure that system goals meet human expectations. The innovation at this level is to clearly implant the human goal orientation into the AI system, so as to form a closed loop that is guided by the goal and runs through the top and bottom. Traditional AI systems usually do not have an internal target concept, but the DIKWP system achieves this through the purpose layer, so that the processing of data, information, knowledge, and wisdom all revolves around the target service. This is a unique advantage of the artificial consciousness model.

In the process of realizing the above technical routes, this project will focus on the following innovations:

(Innovation point 1) The first application of an artificial consciousness closed-loop architecture in the medical field: This project will take the lead in applying the DIKWP artificial consciousness model to Alzheimer's disease assessment and intervention scenarios, building a full-link closed-loop system of perception-cognition-decision-intention. This architecture breaks through the current one-way, process-based model of medical AI and realizes self-regulation and evolution of the system. In particular, the introduction of the dual-cycle structure (cognitive cycle + metacognitive cycle) gives the system an initial prototype of self-awareness. This is a groundbreaking international exploration, heralding the leap of intelligent medical systems from automation to autonomous intelligence.

(Innovation point 2) Intelligent semantic fusion of multi-source heterogeneous data: The project creatively integrates brain signals, physiological data, and cognitive task semantics, with the information layer contextualizing pattern recognition. This fusion differs from general multimodal fusion: it is not merely a superposition at the data level, but assigns meaning to the data at the semantic level, which greatly improves explanatory power and decision relevance. For example, by combining EEG with the task requirements, the system can explain "why this task failed" and provide a cause analysis, rather than merely outputting a failure label. This intelligent semantic fusion greatly improves AI's ability to understand complex clinical phenomena.
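The "cause analysis instead of a bare label" idea can be illustrated with a toy rule: task semantics and physiological context jointly determine a candidate explanation for a failure. The feature names ("attention_index", "fatigue_index") and thresholds below are illustrative assumptions, not validated clinical markers.

```python
def explain_failure(task, eeg_features, outcome):
    """Toy illustration of semantic fusion: instead of emitting a bare
    pass/fail label, combine the task's semantics with physiological
    context to propose a candidate cause for the failure."""
    if outcome == "pass":
        return "task completed"
    # Low sustained attention during the task is the most likely cause.
    if eeg_features.get("attention_index", 1.0) < 0.4:
        return f"{task} failed: sustained attention dropped during the task"
    # High fatigue is the next candidate explanation.
    if eeg_features.get("fatigue_index", 0.0) > 0.7:
        return f"{task} failed: fatigue likely interfered with performance"
    # Otherwise, attribute the failure to the task's own cognitive demand.
    return f"{task} failed: consistent with a memory-encoding deficit"

msg = explain_failure("delayed-recall", {"attention_index": 0.3}, "fail")
```

In the actual system this rule table would be replaced by the information layer's learned models, but the output contract is the same: a semantically meaningful explanation that downstream layers and clinicians can act on.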

(Innovation point 3) Personalized cognitive reasoning driven by large models: We introduce large models (such as GPT) as one of the core inference engines at the knowledge layer, and mitigate their black-box and hallucination problems through knowledge injection and structured prompts. More importantly, the project introduces the concept of digital twins, allowing the large model to perform "customized thinking" for each user. At present, there are few studies on using LLMs to track and reason about an individual's continuous cognitive state; our solution allows the LLM to truly become the patient's "AI confidant", accumulating an understanding of their characteristics over time and providing tailored suggestions. This is a major advance in AI-based management of cognitive impairment, making humanized artificial intelligence possible.
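One common form of knowledge injection is to place verified facts and the user's longitudinal profile directly into a structured prompt, instructing the model to ground its answer in that material. The sketch below only builds such a prompt string; it deliberately makes no API call, and all field names, the user ID, and the example fact are hypothetical.

```python
def build_structured_prompt(profile, kb_facts, question):
    """Sketch of knowledge injection via structured prompting: verified
    facts from the knowledge layer and the user's longitudinal profile
    (the "digital twin") are embedded in the prompt, and the model is
    instructed to answer only from that grounded material."""
    facts = "\n".join(f"- {f}" for f in kb_facts)
    history = "\n".join(f"- {h}" for h in profile["history"])
    return (
        "You are a cognitive-health assistant.\n"
        "Answer ONLY using the verified facts and user profile below; "
        "if they are insufficient, say so instead of guessing.\n\n"
        f"[Verified knowledge]\n{facts}\n\n"
        f"[User profile: {profile['id']}]\n{history}\n\n"
        f"[Question]\n{question}\n"
    )

prompt = build_structured_prompt(
    {"id": "user-007", "history": ["MoCA 24 (3 months ago)", "MoCA 22 (today)"]},
    ["A MoCA score below 26 suggests possible cognitive impairment."],
    "How has this user's cognition changed, and what should we watch?",
)
```

The explicit "say so instead of guessing" instruction is one simple hallucination-control measure; in the full system it would be combined with knowledge-graph verification of the model's output at the intelligence layer.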

(Innovation point 4) An interpretable and controllable cognitive operating system prototype: The project is committed to realizing the concept of "embedding DIKWP into AI to form a semantic operating system" proposed by Professor Duan Yucong. In our system, every step of the process, from data to intent, has a clear meaning and can be traced. By monitoring the inference chain of the LLM at the intelligence layer and defining a five-layer semantic mapping at the knowledge layer, we create a human-friendly cognitive operation platform that allows doctors and researchers to see, step by step, how the AI reaches its conclusions and decisions. Such white-box AI is particularly important in medical care: it directly addresses the "black box" problem of deep learning and improves the reliability of the system. Technically, this is also a highlight of the artificial consciousness system: humans and machines share the same cognitive language and logic.

(Innovation point 5) A novel adaptive control strategy in the field of cognitive intervention: This project combines reinforcement learning with artificial consciousness to create a goal-driven method of cognitive intervention control. In the past, cognitive training mostly used fixed protocols or simple performance-based adjustments and lacked systematic optimization. The intent-based reinforcement learning we implement at the intelligence layer enables the system to explore the best intervention path through trial and error while ensuring that it does not deviate from the rehabilitation goals set by humans. This self-optimizing intervention strategy is proposed for the first time in the field of AD rehabilitation and is expected to significantly improve intervention effectiveness and efficiency. For example, the system may find that, for a certain type of patient, "short, frequent" training sessions are more effective than "long, concentrated" ones, and will automatically favor the former. This is similar to adaptive control of physical systems, but applied to a system as complex as human cognition, making it a cross-domain innovation.
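The "explore by trial and error, but never violate human-set goals" constraint can be sketched as a guard step before strategy selection: candidate schedules are first filtered by the rehabilitation intent, and only then is the best remaining one chosen. The candidate fields ("expected_gain", "fatigue_risk") and the 0.5 fatigue limit are illustrative assumptions.

```python
def choose_schedule(candidates, goal_guard):
    """Sketch of intent-constrained selection: candidate schedules found
    through trial and error are first filtered by the human-set
    rehabilitation goal (the "intent"); the best remaining one wins."""
    allowed = [c for c in candidates if goal_guard(c)]
    if not allowed:
        return None  # no safe option: escalate to the purpose layer
    return max(allowed, key=lambda c: c["expected_gain"])

candidates = [
    {"name": "long_concentrated", "expected_gain": 0.9, "fatigue_risk": 0.8},
    {"name": "short_frequent",    "expected_gain": 0.7, "fatigue_risk": 0.2},
]
# Human-set constraint: never let the predicted fatigue risk exceed 0.5.
guard = lambda c: c["fatigue_risk"] <= 0.5
best = choose_schedule(candidates, guard)
```

Note the design choice: the higher-gain schedule is rejected because it violates the intent constraint, so the system favors the "short, frequent" option even though pure reward maximization would pick the other one. This is the essential difference from unconstrained reinforcement learning.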

(Innovation point 6) Integrating AI ethics and human-centered practice: While pursuing technological breakthroughs, this project focuses on integrating ethical considerations into the technical solution. Through the value constraints of the purpose layer and the safety supervision of the intelligence layer, we ensure that AI decision-making is always centered on the interests of patients. In the system design, patients and experts participate in goal setting to achieve human-machine collaboration. These explorations set a benchmark for responsible innovation in healthcare AI. In particular, on the frontier of artificial consciousness participating in decision-making, we have formulated clear ethical norms so that the AI is not only intelligent but also benevolent, promoting the implementation of AI ethical governance in concrete medical products.

Through the implementation of the above technical routes and the realization of innovation points, this project is expected to promote the deep integration of artificial consciousness theory and brain health in academics, establish a new paradigm of intelligent cognitive impairment intervention system in technology, and provide breakthrough tools for early intervention of Alzheimer's disease in application. Next, we will introduce the project team and basic conditions to prove that we have the strength and resources to complete the above research.

5. Project team and basic conditions

1. Project Team Overview: This project is led by Professor Duan Yucong, whose team brings together multidisciplinary experts in artificial intelligence, cognitive science, biomedical engineering, and related fields, and has the strength and experience needed to implement this project. Professor Duan is the originator of the DIKWP artificial consciousness model and an international leader in the field. He is currently a Corresponding Member of the National Academy of Artificial Intelligence of the United States, a Foreign Academician of the National Academy of Sciences of Serbia, an Academician of the International Academy of Advanced Technology and Engineering, and the Chairman of the World Association of Artificial Consciousness. In the basic theory of artificial intelligence, Professor Duan has long been committed to research on cognitive computing and semantic technology, and was listed among the world's top 2% of scientists in 2022. He took the lead in proposing "intent/purpose" as the core of AI architecture and established the mesh DIKWP model to achieve two-way feedback and iterative semantic updates across all layers. This theory is an international milestone and has become a new way to address the black-box and interpretability problems of large models. As first inventor, Professor Duan has been granted 114 invention patents, including 15 PCT international patents, covering cutting-edge fields such as large model training, artificial consciousness construction, cognitive operating systems, AI governance, and privacy security. These patents form a complete intellectual property portfolio for the DIKWP technology system. For example, one of his patents proposes a "dual circulation" artificial consciousness architecture that introduces metacognitive self-monitoring, regarded as an important way to give AI initial self-awareness.
In addition, a number of patents embed the DIKWP model into the LLM inference process to create an interpretable semantic operating system, along with innovative solutions for multimodal semantic graph coupling and LLM hallucination prevention and control. These technical achievements lay a solid foundation for this project. Professor Duan has recently presided over and participated in a number of national projects, published dozens of papers in top international journals, and participated in the development of 2 IEEE international standards for financial knowledge graphs and 4 industry knowledge graph standards. Other core members of the team include: cognitive neuroscience experts skilled in applying EEG/imaging technology to cognitive aging; a medical informatics expert responsible for the medical knowledge graph and domain fine-tuning of the LLM; and a software engineering expert responsible for system integration and platform development. This combination of disciplines ensures that the team can collaborate efficiently across the entire chain, from artificial consciousness theory to algorithm development to medical application.

2. Accumulated expertise in artificial consciousness modeling: The team has conducted in-depth research on the theory and practice of the DIKWP artificial consciousness model and achieved substantial results. In addition to theoretical papers and patents, we have developed prototype systems of the DIKWP architecture and successfully validated their applicability in medicine and other fields. For example, the team built the world's first artificial consciousness prototype system, applying DIKWP to human-computer interaction in gout diagnosis and treatment and realizing intelligent Q&A and solution suggestions from a physician-assistant AI on patients' symptoms. In this prototype, the data layer perceives the patient's symptom data, the knowledge layer invokes the medical knowledge base, the wisdom layer gives the diagnosis and treatment plan, and the purpose layer ensures that AI decision-making complies with medical standards. The whole process fully reflects the synergy of all levels of artificial consciousness and achieved good results. This provides valuable experience for extending a similar architecture to Alzheimer's disease scenarios. In addition, the team has explored applications of the DIKWP model in traditional Chinese medicine (TCM) diagnosis and treatment, industrial control, and other fields. For example, we simulated the whole TCM diagnosis and treatment process for cold-pharyngitis-bronchitis based on DIKWP, demonstrating that an artificial consciousness system can handle complex multi-stage decision-making. These cross-domain cases show the universality and portability of the DIKWP model and have also exercised the team's ability to model different knowledge systems.

3. Strengths in knowledge structure generation and semantic technology: The team has strong capabilities in knowledge graphs and semantic computing. Professor Duan Yucong also serves as vice president of the Hainan Invention Association and leads the team in building knowledge graphs and ontologies in various vertical fields. For example, the international standard project for financial knowledge graphs in which we participated involved the construction of large semantic networks and inference methods, which cultivated our ability to handle complex knowledge structures. In terms of medical knowledge, our members have participated in the construction of the TCM encephalopathy knowledge base of the State Administration of Traditional Chinese Medicine and are familiar with medical ontologies and clinical semantic data. In addition, the "semantic mathematics" method developed by the team can mathematically express the conceptual relationships in a knowledge graph and support computational reasoning. These technical reserves will contribute to the construction of the Alzheimer's disease knowledge base and cognitive ontology in this project, as well as the combination of knowledge with large models. In particular, the RDXS relational definition semantic model (a unified semantic framework proposed by Professor Duan) can map incomplete and inconsistent information onto the DIKWP graph to achieve multi-source knowledge fusion, which is helpful for handling the complexities of real-world medical data.

4. Experience in developing cognitive regulation systems: The team also has a relevant foundation in cognitive training systems and human-computer interaction. Our engineering team has developed the "Brain Health Training" app, which includes attention training games and memory training tasks, giving us first-hand experience with user-interface friendliness and motivation mechanisms for the elderly. The team has also cooperated with local hospitals in Hainan to carry out small-scale cognitive intervention experiments with MCI patients, and is familiar with cognitive assessment methods (such as the MoCA and RBANS scales) and statistical methods for analyzing intervention effects. This will help us carry out system validation and performance evaluation in this project. The team includes members with biomedical engineering backgrounds and experience with near-infrared brain imaging equipment and 64-channel EEG acquisition systems, enabling us to build the project's data acquisition hardware environment. At the same time, we have young PhDs experienced in VR development who can design targeted VR task scenarios based on cognitive science principles, ensuring that the intervention content is scientifically effective as well as engaging and practical. In addition, the team has established contacts with many hospitals and elderly care institutions in China, and can recruit volunteers from the early AD population for systematic testing in the later stages of the project. These resources and preliminary work provide the conditions necessary to support implementation of the project.

5. Infrastructure and supporting conditions: The team has a good scientific research environment and conditions for the project. Relying on the laboratory platform of the School of Computer Science, we have high-performance computing servers (GPU clusters) for large model fine-tuning and deep learning training. There is a dedicated cognitive neuroscience laboratory for EEG/behavioral experiments, equipped with an electromagnetically shielded room to improve signal quality. The university library and electronic resources provide access to a wealth of medical and AI literature, facilitating our cross-disciplinary research. The team has also received special financial support from the Hainan Provincial Department of Science and Technology, which can be used to purchase the required software and hardware (such as portable fNIRS devices, VR headsets, and psychological experiment software). In addition, the team has established a clinical research center with a local tertiary hospital, which facilitates translational medical research. We plan to set up a clinical trial site in the hospital's geriatric department in the middle of the project to verify the actual effect of the developed system, a process that will be strongly supported by the cooperation channel between the university and the hospital.

In summary, the project team has outstanding advantages and rich accumulation in artificial consciousness modeling, knowledge graphs, and intelligent system research and development. We have not only top-level theoretical guidance but also a solid engineering and clinical foundation, and we cooperate closely across disciplines. The university and relevant units provide the necessary venues, equipment, and financial conditions. All of this will ensure the smooth development of the project's research work and provide a strong guarantee for achieving the expected goals.

6. Expected results and transformation paths

This project aims to generate significant scientific output and practical value. The expected results include theoretical innovations, technology implementations, and application demonstrations; on this basis, a clear path for the transformation and promotion of the results will be formulated to support national smart healthcare and healthy aging.

1. Theoretical innovations: At the scientific level, this project will refine and verify the applicability of DIKWP artificial consciousness theory in the field of cognitive impairment assessment and intervention, and is expected to produce high-level academic papers and monographs. For example, we will summarize the methods used by the artificial consciousness system to model early cognitive decline in Alzheimer's disease and publish them in top journals of artificial intelligence or neural engineering, filling the research gap on the application of artificial consciousness in digital health. We also plan to refine the conceptual framework of "cognitive closed-loop intervention" into industry reports or monographs, providing theoretical reference for future smart elderly care and cognitive impairment intervention. In addition, the project research may open up new interdisciplinary topics (such as "artificial consciousness + digital neuromedicine"), and these theoretical explorations will consolidate China's leading position in the basic theories of artificial intelligence.

2. Technical achievements: At the engineering level, this project will deliver a prototype artificial consciousness system for early assessment, monitoring, and intervention of Alzheimer's disease. The system includes: a multimodal data acquisition hardware suite (EEG cap, fNIRS armband, eye tracker, VR glasses, etc.), a back-end intelligent analysis and decision-making software platform, and a user-interface application. We will apply for software copyrights and invention patents for the system, and expect to form a patent group covering core innovations such as the DIKWP-based cognitive evaluation method, the artificial-consciousness-driven adaptive training task method, and the medical artificial consciousness operating system architecture. These independent intellectual property rights will lay the foundation for industrialization. In terms of specific technical indicators, the system is expected to increase the accuracy of cognitive state assessment by more than 20% (compared with traditional scales) and reduce the rate of cognitive decline after intervention by more than 30% (compared with a non-intervention group). We will also open up some non-sensitive modules as SDK interfaces so that they can connect with other medical systems, forming a standardized technology platform. The platform is expected to pass medical device certification (such as national Class II medical devices), paving the way for subsequent large-scale deployment.

3. Application demonstration results: In terms of application, we strive to complete demonstration applications of a certain scale within the project period. First, a trial system will be deployed in the geriatric cognitive impairment clinic of the cooperating hospital, where dozens of MCI/early AD patients will be evaluated and trained to verify the effectiveness and safety of the system. The resulting clinical data and case reports will serve as examples of project outcomes. Second, the system will be piloted in community elderly care service centers, providing cognitive screening and home training services for the elderly in a portable-device-plus-cloud-platform model. We expect to establish at least 2 community demonstration sites covering at least 100 elderly people, exploring a new model for integrating artificial consciousness and cognitive intervention into grassroots public health. We will summarize the demonstration experience, formulate service processes and management specifications, and provide replicable templates for subsequent promotion. In addition, the application demonstration will accumulate user feedback for optimizing the system's human-computer interaction and personalized functions, so that the results more closely match actual needs.

4. Standardization outputs: Given the cutting-edge nature of this project, we will actively participate in the formulation of relevant national/industry standards and elevate the project results to standardization. With the support of the Professional Committee of Elderly Health Standards of the National Health Commission, we will promote the establishment of the industry standard project "Technical Requirements for Digital Assessment and Intervention Systems of Cognitive Function", prepare draft standards, and submit them for review. The content covers the performance indicators of multimodal evaluation equipment, data interface formats, and interpretability requirements for AI algorithm results. We also plan to work with domestic experts to develop a "Specification for the Evaluation of Artificial Consciousness Medical Systems" to provide a basis for evaluating the safety and effectiveness of similar systems. Through standardized outputs, we will ensure that the project results can be applied on a larger scale in a standardized and safe manner, leading the direction of industrial development.

5. Industrialization promotion plan: After the project is completed, we will accelerate industrialization under the industry-university-research cooperation mechanism. Relying on the policies of the Hainan Free Trade Port, we plan to establish a high-tech start-up company or cooperate with existing medical AI companies to focus on productization and market development of the project's results. The 114 related granted patents and the new patents from this project will serve as the company's core intellectual property. In the short term, the company will refine the prototype system into commercial versions of the product, including both a medical edition and a home edition. The medical edition is aimed at hospitals and rehabilitation centers, with the key function of assisting doctors in screening, diagnosis, and formulating rehabilitation plans; the home edition is aimed at community and individual users, emphasizing ease of use and engagement. We will actively apply for medical device registration, which is expected to take 1-2 years to complete, obtaining legal sales qualifications. Then, through demonstrations in tertiary hospitals, the product will be expanded to geriatric hospitals nationwide, while we also strive to enter the appropriate-technology directory of primary health institutions. In terms of marketing, we will seek support from insurance institutions and government purchase of services, and include the system in the scope of chronic disease management to reduce the burden on individuals. We expect that within 3-5 years the product can reach large-scale application, serving millions of elderly families and bringing good social and economic benefits to the enterprise.

6. Smart healthcare integration and long-term vision: In the long run, the results of this project can be integrated as an important part of the smart healthcare system. We will connect the system with regional national health information platforms and electronic health record data to achieve more comprehensive health management for the elderly. For example, cognitive assessment results can be uploaded to the health record for the doctor's reference, or combined with wearable device data to realize linked monitoring of cognitive and physical health. This will promote the integration of cognitive health into the overall framework of smart healthcare. At the same time, we plan to build a cloud-based artificial consciousness cognitive service platform, so that grassroots health centers and nursing homes can obtain professional cognitive assessment and training solutions without complex equipment, simply by calling our AI services through the cloud platform, thereby improving grassroots service capacity and promoting health equity. Looking to the future, if the artificial consciousness system succeeds in AD intervention, its model can also be extended to the management of other chronic diseases, such as Parkinson's rehabilitation and psychological intervention for depression, realizing a pattern of "one artificial consciousness system, many application scenarios". This project is therefore not only dedicated research on AD, but also a cutting-edge exploration of AI-enabled medical care, and the concepts and technologies it develops will inject new vitality and direction into smart healthcare and smart elderly care.

In summary, this project is expected to produce fruitful research results: high-level publications, independent intellectual property rights, and a practical artificial consciousness cognitive intervention system. Driven by national policies and market demand, these achievements will be rapidly transformed and implemented through standardization and industrialization, bringing tangible benefits to Alzheimer's disease patients and the elderly, while promoting innovation and development of China's artificial intelligence and medical device industries, with great social significance and economic value. We are confident and capable of successfully completing the tasks within the project cycle and delivering excellent results under the national key R&D program.