Call for Collaboration: Research on Brain-Inspired Multimodal Tactile Chips and Sensory Control Systems Based on the DIKWP Model and Artificial Consciousness
World Academy for Artificial Consciousness (WAAC)
- International Standardization Committee of Networked DIKWP for Artificial Intelligence Evaluation (DIKWP-SC)
World Artificial Consciousness CIC (WAC)
World Conference on Artificial Consciousness (WCAC)
Email: duanyucong@hotmail.com
Contents
1. Background and significance of the study
1.1 Background requirements for brain-inspired multimodal tactile perception
1.2 DIKWP mesh model: modeling advantages of cognitive-perceptual-execution closed-loop
1.3 Artificial Consciousness Theory: Empowering Self-Perception, Interaction Purpose Generation and Feedback Regulation
1.4 Benchmarking strategic value with international frontiers
2. Research objectives and overall technical route
3. Core research content
4. Hardware system and hardware-software collaborative design
5. Phased achievements and assessment indicators
6. Feasibility analysis and transformation plan
6.1 Feasibility Analysis: Research Basis and Team Advantages
6.2 Achievement Transformation Plan: Application Prospects and Promotion Strategies
1. Background and significance of the study
1.1 Background requirements for brain-inspired multimodal tactile perception
With the development of artificial intelligence and robotics, giving machines human-like tactile perception has become key to achieving dexterous manipulation. Compared with perception systems that rely on vision alone, touch serves as an essential "language" through which an agent interacts with its environment: it provides rich information about an object's mechanics, temperature, and texture, helping robots perform finer and safer operations. For example, when human fingers assemble tiny parts or operate tools, they are continuously corrected by closed-loop haptic feedback, achieving micron-level control that far exceeds the accuracy of visual positioning. In robotics, introducing high-resolution multimodal tactile sensing is expected to give robots a human-like ability to correct small errors and to complete delicate tasks that in the past only skilled craftsmen could perform.
However, the effective integration and application of multimodal haptics (force, temperature, slip, material texture, etc.) faces several challenges. First, multi-sensor signals are high-dimensional and update rapidly; the traditional serial-acquisition, centralized-processing architecture often incurs high latency and high energy consumption and cannot support timely closed-loop control. Second, different modalities carry information in different units (force magnitude, temperature change, vibration spectrum, etc.), and compressing and extracting features without losing key details challenges existing information-processing methods. Traditional force-control methods usually rely on low-dimensional simplified models and cannot cover such rich tactile information. A new brain-like information-processing paradigm is therefore urgently needed to efficiently fuse multimodal tactile information and improve the system's ability to interact with the environment through closed-loop feedback.
From the perspective of biological inspiration, the human somatosensory system itself has a highly parallel, distributed multimodal perception mechanism. The human hand, for example, can simultaneously sense the pressure, temperature, and texture of an object, and the nervous system integrates these sensory signals in real time for judgment and decision-making; multimodal fusion significantly improves the ability to evaluate and recognize object attributes. By contrast, if an artificial system adopts the traditional linear scheme of acquisition followed by computation, the processing chain is lengthy, and the bandwidth and power consumption are enormous because every sensing signal must be digitized, transmitted, and processed. This has pushed researchers to explore more brain-like hardware and algorithms, such as neuromorphic chips that complete the fusion and compression of different modal signals at the sensing front end, reducing the back-end processing burden. It has been demonstrated that by integrating emerging devices such as memristors into pressure sensors, multimodal signals such as pressure and temperature can be fused in situ into a unified pulse sequence, which significantly improves data compression and response speed while preserving independent decoding and high fidelity of each modality within the fused signal. These advances provide important ideas for developing efficient multimodal haptic sensing hardware.
1.2 DIKWP mesh model: modeling advantages of cognitive-perceptual-execution closed-loop
In order to give full play to the role of multimodal haptics in intelligent closed-loop control, a corresponding cognitive model is needed that can effectively characterize and regulate the whole process from perception to decision-making to execution feedback. Here, the DIKWP model proposed by the team of Professor Yucong Duan of Hainan University, China, provides a new solution. DIKWP is a cognitive framework that extends the classic "Data-Information-Knowledge-Wisdom" (DIKW) hierarchy with a fifth "Purpose" layer. More importantly, the DIKWP model breaks the one-way-progression limitation of the traditional DIKW pyramid, transforming the five cognitive elements from a linear hierarchy into a highly networked structure. This networked DIKWP model allows multi-directional coupled feedback between levels; in particular, the top-level purpose can influence perception and decision-making downward, more realistically describing the complex dynamic cognitive mechanisms of the human brain. Analyses show that after the DIKWP model introduces the purpose dimension and adopts a networked interaction structure, up to 25 pairwise interaction paths can form among the five elements, including bottom-up perceptual transmission (e.g., data → information, information → knowledge), top-down purpose regulation (e.g., purpose → wisdom, wisdom → knowledge), and parallel bidirectional interactions within and across layers (e.g., information ↔ knowledge, wisdom ↔ purpose). This means the cognitive system can interact dynamically across levels, just as the brain does: for instance, high-level purpose guides attention allocation, and memory and emotion influence the interpretation of new information.
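As one illustration, the "25 pairwise interaction paths" figure can be read as the set of ordered pairs over the five elements, including self-interactions (5 × 5 = 25). The sketch below, which assumes this reading, enumerates the paths and separates bottom-up from top-down directions:

```python
from itertools import product

# The five elements of the networked (mesh) DIKWP model.
ELEMENTS = ["Data", "Information", "Knowledge", "Wisdom", "Purpose"]

# Assumed reading: every element can influence every element (including
# itself), giving 5 x 5 = 25 directed interaction paths.
interaction_paths = list(product(ELEMENTS, ELEMENTS))

rank = {e: i for i, e in enumerate(ELEMENTS)}
bottom_up = [(s, d) for s, d in interaction_paths if rank[s] < rank[d]]
top_down = [(s, d) for s, d in interaction_paths if rank[s] > rank[d]]
```

Under this reading, bottom-up transmission (e.g., Data → Information) and top-down regulation (e.g., Purpose → Wisdom) are simply the upper and lower triangles of the 5 × 5 interaction matrix, with the diagonal as self-interaction.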
Introducing the DIKWP mesh model into a brain-like multimodal tactile system offers significant modeling advantages. First, it provides clear semantic-structure support for the perception-cognition-execution-feedback closed loop: raw tactile data from the underlying sensors (D) is processed into environmental information (I), further integrated into knowledge of the environment and the contacted object (K), supporting the system in making wisdom-based decisions or action plans (W), while the Purpose (P) layer ultimately directs and adjusts the entire process. Through such hierarchical abstraction, the sensing, cognition, and decision-making modules of the robot's tactile system share a unified semantic interface, so information can circulate between levels without losing meaning. Second, the networked structure of DIKWP allows the cognitive layers to guide the perception layer in reverse: for example, when the purpose layer sets a specific operational goal, the upper layers impose "expectations" on lower-level perception, achieving predictive perception and attention modulation similar to the human brain. This helps extract the features most relevant to the task at hand from the flood of tactile data, enabling cognition-driven information compression and improving processing efficiency and robustness. Third, the formal semantic mathematical framework of the DIKWP model allows each cognitive link and its transformation rules to be represented by mathematical structures. This provides a well-defined basis for modeling the information flow of complex tactile systems, facilitates analysis of system behavior at every layer, and ensures consistency and explainability in the coordination of modules within the execution closed loop.
Based on the above advantages, the DIKWP mesh model has been regarded as a major theoretical innovation in AI cognitive computing. The model establishes a unified "cognitive language" within an AI system, so every step from perceptual input to decision output can be traced. In particular, by explicitly incorporating "Purpose" into the model, DIKWP combines subjective goals with objective cognitive processes, providing a new paradigm for constructing autonomous agents. In this project, the DIKWP model is introduced to guide interactive modeling of multimodal tactile information, exploiting its strengths in structured semantic representation and multi-directional feedback across the cognitive-perceptual-execution closed loop, and laying a theoretical foundation for a tactile cognitive system resembling the human brain.
1.3 Artificial Consciousness Theory: Empowering Self-Perception, Interaction Purpose Generation and Feedback Regulation
With the development of artificial intelligence, artificial consciousness (AC) has gradually become the frontier exploration direction to improve the advanced cognitive ability of autonomous agents. This project integrates the theory of artificial consciousness into the brain-like tactile system, aiming to endow robots with initial "self-awareness", so as to show a higher level of perceptual understanding and autonomous decision-making ability in self-environment interaction.
The role of artificial consciousness in this study is first reflected in enhancing the system's self-environment perception. A typical control system has difficulty distinguishing changes in the external environment from the effects of its own actions, but with an artificial consciousness module, the system establishes an internal "self-model" that monitors in real time the difference between its own state and the external environment. For example, when a mechanical finger holds an object, the artificial consciousness module helps the system realize that "applying pressure" is its own behavior while the "reaction force of the object" comes from outside, so it interprets sensor signals more accurately. This self-awareness improves the accuracy of tactile interpretation and reduces interference caused by the system's own movements.
Secondly, artificial consciousness can generate interaction purposes from internal and external perception and actively guide system behavior, rather than merely reacting passively to the environment. A tactile system with artificial consciousness simulates the human purpose-generation mechanism: when it detects that the environmental state does not match the goal, the artificial consciousness module triggers a corresponding purpose signal, driving the system to act to close the gap. For example, when a robot finger senses that an object's sliding tendency exceeds a safety threshold, it can autonomously generate a "firm grip" purpose and adjust the control strategy to increase the gripping force. This purpose-generation process resembles a human instinctively tightening the grip upon sensing a cup about to fall, embodying initiative and foresight. Driven by artificially conscious purposes, the haptic system can make more flexible decisions based on the current context, rather than relying on pre-programmed fixed feedback logic.
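A minimal sketch of this purpose-generation step is given below; the threshold value, field names, and the 1.5× force correction are all hypothetical placeholders, not values from the project's actual design:

```python
from dataclasses import dataclass

# Hypothetical normalized slip-velocity limit; a real value would come
# from sensor calibration and safety analysis.
SLIP_THRESHOLD = 0.8

@dataclass
class TactileState:
    slip_velocity: float  # normalized slip indicator from vibration sensing
    grip_force: float     # current normal force applied by the finger

def generate_purpose(state: TactileState) -> dict:
    """Map a perceived goal/environment mismatch to an interaction purpose."""
    if state.slip_velocity > SLIP_THRESHOLD:
        # Environment no longer matches the goal "object held stably":
        # autonomously emit a 'firm grip' purpose with a corrective action.
        return {"purpose": "firm_grip",
                "action": {"grip_force": state.grip_force * 1.5}}
    return {"purpose": "maintain", "action": {"grip_force": state.grip_force}}
```

The point of the sketch is the shape of the mechanism: the purpose is generated from the mismatch itself, not looked up from a fixed stimulus-response table.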
Third, artificial consciousness provides a high-level feedback-regulation mechanism that lets the system self-monitor, self-reflect, and self-regulate. This is achieved through a "dual-cycle" architecture (DIKWP × DIKWP), which adds a metacognitive cycle on top of the basic cognitive process. The basic cycle processes perceptual information from the environment and generates action output; the metacognitive cycle continuously monitors the working state and results of the basic cycle and compares them with the desired goal. When deviations occur, the metacognitive cycle (i.e., the artificial consciousness module) generates reflective signals that regulate the parameters and behaviors of the basic cycle. This architecture gives the system a "mind's eye" that watches both the environment and itself. In this project, we plan to adopt the DIKWP × DIKWP artificial consciousness architecture: one set of DIKWP cognitive processes handles environmental information, the other performs metacognitive self-monitoring, and the two are nested to form a complete consciousness system. Relevant research and patents regard this design as an important way to build AI systems with initial self-awareness. Through such feedback and control, the robot's haptic system can autonomously find and correct problems during task execution: when it detects that a grasping action did not achieve the expected effect, it adjusts its strategy; after completing a task, it evaluates the results and accumulates experience to improve the next execution. This self-regulation capability significantly improves the safety and robustness of operation, avoiding error accumulation and loss of control.
Finally, from a higher perspective, incorporating artificial consciousness can also help achieve human-machine collaboration and value alignment. With the introduction of the "purpose" layer of the DIKWP model, the decision-making process of the AI system becomes more transparent and controllable, and each step can be explained and traced. "By embedding the key layer of 'purpose' inside the model, we are not only able to make AI smarter, but also ensure that it always serves human values and security needs," said Professor Yucong Duan. This means that tactile systems with artificial consciousness can better understand human purposes when interacting with people, and can behave in line with human expectations and ethical norms, avoiding risky behavior. This is of great strategic significance for the deployment of autonomous robots in a hybrid human-machine environment in the future.
In summary, introducing artificial consciousness theory into the brain-inspired multimodal tactile system can raise the robot's intelligence across perception and understanding, active decision-making, and self-regulation, promoting its evolution toward a higher form of autonomous agent. This project will take the DIKWP × DIKWP model as the core framework, exploit the role of artificial consciousness in self-environment perception, purpose generation, and feedback control, and establish a human-like self-awareness and cognitive closed loop for tactile robots.
1.4 Benchmarking strategic value with international frontiers
This study is closely related to the major national demand in the field of brain-inspired perception and intelligence, and has important strategic significance and frontier value. First of all, at the national level, the development of independent and controllable intelligent perception chips and brain-like cognitive systems is the basis for ensuring the safety of China's new generation of artificial intelligence and high-end manufacturing. Under the current international situation, the risk of core AI chips and key algorithms being controlled by others has attracted much attention. This project aims to develop multi-modal tactile perception chips and integrate independent innovation cognitive models and artificial consciousness architectures, which will help fill the gap in the field of intelligent sensors and cognitive processing chips in China and enhance technological autonomy. At the same time, smart robots and embodied intelligence are also one of the key technologies to support the upgrading of high-end manufacturing, aerospace, medical rehabilitation and other industries. Robots with human-like tactile and cognitive abilities can be widely used in precision assembly, hazardous environment operations (such as disaster relief, explosive disposal), medical care, and prosthetics for the disabled, meeting the country's urgent needs in intelligent manufacturing, national defense security, and healthy China. It can be seen that this project has important strategic value for consolidating China's technological advantages in the field of intelligent robots and seizing the commanding heights of the future industry.
From the perspective of international frontiers, the research content of this project is also closely aligned with the latest trends in academia and industry. In terms of multimodal tactile sensing, there have been some international exploration results. For example, as early as the 2010s, the American company SynTouch launched the BioTac bionic tactile sensor, which is the world's first multi-modal haptic device that can simultaneously sense three-dimensional force, temperature and micro-vibration, and can simulate the all-round haptic function of human fingertips. Products such as BioTac have played an important role in the study of robotic haptics, verifying the value of multimodal haptics in object recognition, material discrimination, and force control feedback. In recent years, research teams from various countries have also successively reported the results of multi-functional integrated flexible electronic skin, single-chip multi-modal tactile sensing modules, etc. For example, the Tsinghua University team has developed an ultra-thin and flexible tactile sensor array that integrates sensory functions such as pressure, temperature, slip, and texture, providing service robots with tactile capabilities close to human skin. Another example is the biomimetic electronic skin developed by Stanford University and other institutions, which uses organic flexible materials to achieve multimodal sensing of temperature and stress. It can be seen that multimodal tactile perception technology has become a research hotspot in the field of robotics, and various new materials, new devices and new architectures are constantly emerging.
However, it should be pointed out that most of the current international research focuses on the new sensors themselves, and relatively little attention is paid to the cognitive processing and high-level intelligence utilization after sensing. Although many systems have the tactile perception hardware of humanoids, they are still limited to the traditional algorithm framework in terms of information utilization, and fail to give full play to the value of tactile information. While drawing on the achievements of international advanced sensors, this project will focus on breaking through the key bottleneck of perception-cognition fusion, and propose a new system with DIKWP cognitive model and artificial consciousness as the core. This kind of research idea of integrating advanced sensing hardware and advanced cognitive algorithms is still a cutting-edge exploration in the world, and has obvious innovation leadership.
In addition, in terms of artificial consciousness and cognitive explainability, this project also benchmarks against the highest international level. At present, more and more attention has been paid to the research on "explainable, controllable, and safe" in the global AI field, and how to make AI have a certain degree of self-awareness and understand its own decision-making logic has become a focus topic. Although companies such as OpenAI and DeepMind have made breakthroughs in large models, they have been less involved in artificial awareness and cognitive transparency. In contrast, Chinese scholars have taken the lead in the DIKWP artificial consciousness model, and the relevant patent portfolio of Professor Yucong Duan's team has formed a systematic technical barrier, which is unique in the innovation of AI cognitive structure. This project will make full use of China's theoretical advantages in this field to introduce artificial consciousness into the actual tactile robot system, and realize the first attempt to use artificial consciousness in multi-modal tactile closed-loop control in the world. This can not only enhance China's discourse power in the research direction of artificial consciousness, but also provide a "Chinese solution" for global AI ethics and governance.
In short, this project integrates brain-like cognitive models, artificial consciousness architectures and multi-modal tactile chips, corresponding to major national needs and aiming at the international frontier of science and technology. The results will have a profound impact on the construction of embodied intelligence and brain-like perception system, and provide strong support for China to achieve a leading position in the era of general artificial intelligence.
2. Research objectives and overall technical route
2.1 Research Objectives
The overall goal of this project is to develop a brain-inspired multi-modal tactile chip and perception control system, and to break through the key technologies of multimodal tactile information cognitive processing and autonomous interaction. Specifically, it includes the following four aspects:
Development of a multi-modal tactile sensing chip system integrating DIKWP model: Design and develop a chip-level sensing system that can simultaneously sense multiple tactile information such as pressure, temperature, sliding friction, and material properties. The system needs to have brain-like information preprocessing capabilities, realize basic data fusion and compression at the sensor level, and provide high-quality input for subsequent cognitive processing.
Realize cognitive-driven information compression, modal fusion and feedback control path: Based on the DIKWP cognitive model, a complete information processing link from sensor data to decision output is established. Through the guidance of high-level cognition (knowledge, wisdom, and purpose) to low-level perception, the selective compression and deep integration of multi-modal tactile data are realized, and a closed-loop control path is formed, and the decision-making results are fed back in real time to adjust the sensing and execution.
Design a tactile purpose generation and intelligent interaction platform with the artificial consciousness system as the core: build a software and hardware platform embedded in the artificial consciousness module, so that the system has the ability of self-monitoring and purpose management. The platform can independently generate or adjust the haptic interaction purpose according to environmental changes and task requirements, and coordinate functional modules at all levels to complete complex human-machine/human-computer interaction tasks.
Complete sensory-cognitive closed-loop demonstrations in typical application scenarios: verify and demonstrate the developed tactile perception and control system in at least three representative practical scenarios. For example, an industrial precision-assembly scenario verifies the robot's ability to use touch for high-precision assembly; a service-robot scenario verifies the role of multimodal haptics in flexible grasping and safe interaction with humans and the environment; a smart prosthetic/rehabilitation scenario validates the effectiveness of the artificial haptic system in improving a prosthesis user's fine control and haptic experience. Through these demonstrations, the system's performance indicators and application value are evaluated and summarized.
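To make the second objective above (cognition-driven compression) concrete, the sketch below shows one hypothetical form of it: the purpose layer supplies per-modality relevance weights, and only sufficiently relevant channels are forwarded downstream. All names, weights, and the cut-off are illustrative:

```python
def purpose_guided_compress(frame: dict, relevance: dict, keep: float = 0.5) -> dict:
    """Selectively forward the modalities the current purpose deems relevant.

    `relevance` is a hypothetical top-down signal from the Purpose layer;
    channels scoring below `keep` are dropped at the source.
    """
    return {mod: value for mod, value in frame.items()
            if relevance.get(mod, 0.0) >= keep}

# One multimodal sensor frame (made-up values).
frame = {"pressure": [1.2, 1.3], "temperature": 24.5,
         "vibration": [0.01, 0.02], "humidity": 0.4}

# For a "detect slip" purpose, pressure and vibration matter most.
slip_relevance = {"pressure": 0.9, "vibration": 0.95,
                  "temperature": 0.2, "humidity": 0.1}

compressed = purpose_guided_compress(frame, slip_relevance)
```

The same frame under a different purpose (say, material identification weighting temperature highly) would compress to a different subset, which is the essence of selective, task-aligned compression.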
2.2 Overall technical route
In order to achieve the above goals, the project has planned a clear technical route, as shown in the figure below (omitted). The technical route is based on the brain-like cognitive architecture, which organically combines multimodal tactile sensing hardware, cognitive algorithms and artificial consciousness modules to form a complete closed-loop system.
(1) Dual-cycle brain-like architecture design: We first propose a "dual-cycle" brain-like information-processing architecture, i.e., a cognitive system architecture driven by the DIKWP × DIKWP model. The architecture contains two nested cognitive loops: a basic loop responsible for processing and feedback control of environmental tactile information, and a metacognitive loop (artificial consciousness) responsible for monitoring and regulating the basic loop. Specifically, the basic loop runs according to the DIKWP process: data collected by the sensors (D) is processed at multiple levels to extract information (I), combined with the knowledge base and context to form knowledge (K), used in the decision unit to make a wisdom judgment (W), and finally turned into action instructions by the purpose module (P). The metacognitive loop also follows the DIKWP structure, but it processes data internal to the basic loop, such as the system's own state, decision rationale, and error signals; it abstracts this internal information into higher-level semantics, and the artificial consciousness module then generates meta-purposes to guide adjustment of the basic loop. With the dual-cycle architecture, high-level purpose can address the environmental target directly (the P layer of the basic loop) and also act on the system itself through the meta-loop (the meta-P layer regulating the basic loop), achieving adaptive regulation of both internal and external environments. Related research considers this architecture a new direction toward autonomous AI, and it provides the overarching blueprint for our haptic system.
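The dual-cycle structure can be sketched as two nested controllers. The code below is a deliberately reduced stand-in, assuming a proportional basic loop and a gain-tuning meta-loop; it illustrates only the nesting, not the project's actual DIKWP × DIKWP implementation:

```python
class BasicLoop:
    """Basic cycle: maps a target/measurement pair to a corrective action.

    The full D -> I -> K -> W -> P chain is collapsed here into a single
    proportional correction for illustration.
    """
    def __init__(self, gain: float = 1.0):
        self.gain = gain

    def step(self, target: float, measured: float) -> float:
        return self.gain * (target - measured)

class MetaLoop:
    """Metacognitive cycle: watches the basic loop's error and tunes it."""
    def monitor(self, loop: BasicLoop, error: float) -> None:
        # Meta-purpose: if the basic loop is still far from its target,
        # strengthen its response (a stand-in for richer meta-regulation).
        if abs(error) > 0.5:
            loop.gain *= 1.2

basic, meta = BasicLoop(), MetaLoop()
target, measured = 1.0, 0.0
action = basic.step(target, measured)   # basic cycle acts on the environment
meta.monitor(basic, target - measured)  # meta cycle regulates the basic cycle
```

The key structural point survives the simplification: the meta-loop's input is the basic loop's internal state and error, and its output is a change to the basic loop itself, not to the environment.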
(2) Multi-modal tactile sensing and pre-processing: At the hardware front-end, the haptic chip developed in this project integrates various types of miniature sensor units to simulate the multi-modal receptors of human skin. These include: a highly sensitive array of pressure sensors for sensing normal force distributions, shear/acceleration sensors for detecting slip and vibration (corresponding to slip), high-resolution thermal sensors for measuring temperature changes and the thermal conductivity of objects (auxiliary material identification), and the necessary humidity/conductivity sensors for detecting material media (e.g. liquid presence or material discrimination). The selection and layout of each sensing unit will refer to the biological structure of human fingertips, and realize the acquisition of rich information such as contact surface deformation, texture, and temperature difference. In order to reduce the processing pressure on the back-end, the front-end of the chip will be fused with neuromorphic pre-processing circuits. For example, memristor arrays or pulse coding circuits can be integrated at the sensor output stage to convert analog signals into pulse trains or incremental information in situ for event-driven data compression. This hardware-layer pre-processing can greatly reduce the amount of redundant data, extract key tactile events, and provide efficient input for subsequent DIKWP layer processing.
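The event-driven compression described for the neuromorphic front end can be illustrated with a simple delta encoder: a sample is transmitted only when it differs from the last transmitted value by more than a threshold. The signal and threshold below are made up for illustration:

```python
def delta_encode(samples, threshold=0.1):
    """Emit (time index, value) events only on significant change."""
    events, last = [], None
    for i, s in enumerate(samples):
        if last is None or abs(s - last) > threshold:
            events.append((i, s))  # transmit this sample as an "event"
            last = s
    return events

# A mostly-static pressure trace with one contact episode.
raw = [0.0, 0.01, 0.02, 0.5, 0.51, 0.52, 0.0]
events = delta_encode(raw)  # 3 events instead of 7 samples
```

Real neuromorphic circuits (e.g., pulse/spike coders or memristive fusion stages) are far richer than this, but the data-reduction principle is the same: redundant steady-state samples never leave the front end.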
(3) Cognitive hierarchy division and information flow: In the core information processing, the system organizes its algorithm modules strictly according to the DIKWP hierarchy, so that information is refined and fused step by step along data → information → knowledge → wisdom → purpose, with upward and downward interactions realized through mesh feedback. The specific flow is as follows: the data layer (D) obtains raw or pre-processed haptic data streams from the sensor chip, such as pulse bursts from sensor nodes; the information layer (I) denoises the data and extracts meaningful feature descriptions, such as contact strength, temperature curves, vibration spectra, and surface-texture features; the knowledge layer (K) integrates multi-source information and combines it with prior knowledge for understanding and reasoning, forming an understanding of the current contact state (e.g., object material type, grasp stability, friction coefficient); the wisdom layer (W), based on the knowledge layer's output, weighs the current task goals and environmental context to generate concrete decision schemes or control strategies (e.g., whether to adjust finger posture, apply more force, or change the contact point), reflecting strategy-level judgment of "what is best"; finally, the purpose layer (P) evaluates and selects among the wisdom layer's schemes according to the task's ultimate goal and value criteria, determines the final action purpose, and issues instructions to the actuator. At the same time, the purpose layer feeds high-level goals and expectations back to the lower layers, guiding perception and knowledge acquisition to align with the overall purpose.
This information flow mechanism ensures that each level of the system has a clear basis for decision-making, and accepts the constraints of high-level goals, forming a closed-loop cognitive control chain.
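A toy end-to-end version of this D → I → K → W → P flow is sketched below, with every stage reduced to a placeholder rule; the thresholds and labels are illustrative inventions, not the project's actual models:

```python
def data_layer(raw):
    """D: pass through the raw/pre-processed tactile stream."""
    return {"force": raw["force"], "vib": raw["vib"]}

def information_layer(d):
    """I: extract meaningful features (made-up thresholds)."""
    return {"contact": d["force"] > 0.1, "slipping": d["vib"] > 0.3}

def knowledge_layer(i):
    """K: interpret the current contact state."""
    if i["contact"] and i["slipping"]:
        return "unstable_grasp"
    return "stable_grasp" if i["contact"] else "no_contact"

def wisdom_layer(k):
    """W: choose a control strategy from the contact state."""
    return "increase_force" if k == "unstable_grasp" else "hold"

def purpose_layer(w, goal="hold_object"):
    """P: check the strategy against the task goal and emit a command."""
    return {"goal": goal, "command": w}

def dikwp_pipeline(raw):
    return purpose_layer(wisdom_layer(knowledge_layer(
        information_layer(data_layer(raw)))))
```

In the real system each stage would be a learned or knowledge-based module and the purpose layer would also feed expectations back downward; the sketch shows only the forward refinement chain.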
(4) Artificial consciousness module integration: In addition to the basic loop above, we design an artificial consciousness module (the metacognitive loop) that runs through all stages of information processing. The module includes sub-units for self-state perception, meta-knowledge reasoning, and meta-purpose decision-making, which continuously monitor the system's internal operation and its conformity with the external environment. For example, it tracks the actual effect of an actuator's completed action, compares it with the desired target, and computes the error; it monitors computing-resource usage and uncertainty levels at each layer to assess the reliability of current knowledge. Once a deviation or anomaly is found, the artificial consciousness module enters a meta-wisdom judgment process to analyze the cause (Is it a perceptual error? An insufficient knowledge base? A change in the environment?), and then generates a tuning strategy at the meta-purpose layer (such as increasing a sensor's sampling rate, invoking an alternative knowledge model, or adding redundancy checks at the execution layer). These adjustments act on the relevant layers of the basic loop through the meta-to-basic feedback interface, optimizing system behavior in real time. For example, when the artificial consciousness detects that tactile sensing may be saturated and distorted, it can order the data layer to switch gain modes; when it detects uncertainty in a knowledge judgment, it can prompt the wisdom layer to adopt a more conservative strategy or retry. This deep involvement of artificial consciousness gives the system human-like self-reflection and error-correction capabilities, truly realizing stable control of the global closed loop.
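The meta-wisdom diagnosis step described above might take the form of the following decision rule; the telemetry fields, cause categories, and adjustment names are all hypothetical placeholders:

```python
def diagnose(telemetry: dict) -> dict:
    """Classify the cause of a deviation and emit a meta-purpose adjustment.

    `telemetry` is assumed to carry internal signals from the basic loop
    (all field names here are illustrative).
    """
    if telemetry.get("sensor_saturated"):
        return {"cause": "perceptual_error",
                "adjustment": "switch_gain_mode"}
    if telemetry.get("knowledge_uncertainty", 0.0) > 0.7:
        return {"cause": "insufficient_knowledge",
                "adjustment": "conservative_strategy"}
    if telemetry.get("environment_changed"):
        return {"cause": "environment_change",
                "adjustment": "increase_sampling_rate"}
    return {"cause": "none", "adjustment": "none"}
```

Each returned adjustment corresponds to one of the meta-to-basic interventions named in the text: switching sensor gain, falling back to a conservative wisdom-layer strategy, or raising the sampling rate.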
(5) Interaction between the actuator and the environment: Finally, the action Purpose produced by the layers above is executed by the robot's actuators (mechanical fingers, grippers, etc.) to accomplish the operation task in the real environment. During execution, the environmental state changes (e.g., object movement, force changes), and these changes are captured by the sensors and fed back to the data layer, starting the next cycle. In this way a closed control loop is formed: sensing, cognition, decision, execution, re-sensing, repeating continuously. Throughout, the artificial consciousness module works without pause, ensuring that each cycle advances the overall goal and constantly corrects deviations. Through this technical route, we will seamlessly connect the chip hardware layer, the cognitive algorithm layer, and the consciousness decision-making layer, building a brain-like tactile system with perceptual understanding, independent decision-making, and self-regulation capabilities.
All in all, the technical route of this project is guided by the DIKWP×DIKWP dual-cycle model: it builds up layer by layer, from the underlying sensor chip to the high-level artificial consciousness platform, finally forming a comprehensive solution integrating software and hardware. This route ensures both the step-by-step progress of the research work and clear, efficient collaboration across module interfaces, providing a reliable path toward the project goals.
3. Core research content
Focusing on the above technical routes and objectives, this project intends to carry out the core research work in the following four aspects:
1. Brain-inspired tactile perception modeling and chip design: Referring to the physiological mechanisms of the human somatosensory system, we will model the process of tactile perception in a brain-inspired way and develop multimodal tactile sensing chips accordingly. The research content includes: analyzing the functional characteristics and distribution patterns of sensing units such as mechanoreceptors and thermoreceptors in human skin, and establishing a model of how tactile signals are conducted and encoded in organisms; on this basis, proposing a bionic sensor design, such as using flexible polymer materials to mimic the skin surface, integrating micromechanical structures for sensitive response to pressure, shear, and vibration, and using MEMS thermopiles to detect temperature and heat flow. The work focuses on multimodal integration and miniaturized packaging, solving the process-compatibility and crosstalk-suppression problems of integrating diverse sensing components on a single chip. At the same time, on-chip primary signal processing circuits will be developed, including low-noise amplification, filtering, and mixed analog-digital conversion, so that the chip output conforms to the data format required by subsequent cognitive processing. Through this task we expect to complete the tape-out and testing of the first version of the multimodal tactile sensing chip; the chip should provide rich, reliable raw haptic data and possess some brain-like preprocessing capability (such as real-time frequency analysis of vibration signals).
2. DIKWP-driven tactile signal cognitive coding system: Construct a hierarchical coding and understanding system for tactile signals based on the DIKWP model. This research addresses the information extraction and representation problem of "sensation → perception → cognition". It mainly includes: developing conversion algorithms from the data layer to the information layer, such as event-triggered pulse coding, which converts continuous sensing data into discrete event streams, and using signal processing and machine learning to extract key features (sudden force changes, temperature change rate, friction and vibration patterns, etc.) from tactile time series, forming a structured information representation. Next, we design the cognitive coding mechanism from the information layer to the knowledge layer: exploring knowledge graphs or causal graphs to represent tactile situational knowledge, mapping multimodal information into semantic labels and relations (such as "object A - smooth surface, temperature 20°C - stable grip"). This step integrates prior knowledge (material thermal conductivity, typical friction coefficients, etc.) with real-time information to infer the current contact state. Finally, we study decision support from the knowledge layer to the wisdom layer: using inference engines or reinforcement learning to produce the best action suggestions based on existing knowledge (e.g., judging whether the direction of finger pressure should be adjusted to increase friction).
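The data-to-information conversion by event-triggered coding can be illustrated with a simple send-on-delta scheme: a continuous sample stream becomes a sparse event stream, emitting only when the signal changes meaningfully. The threshold value here is an illustrative assumption.

```python
# Illustrative send-on-delta event coding for the data -> information
# conversion described above. The threshold is an assumed parameter.
def encode_events(samples, threshold=0.5):
    """Emit (index, value) events only when the signal has moved by
    more than `threshold` since the last emitted event."""
    events = []
    last = None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > threshold:
            events.append((i, x))
            last = x
    return events

# A slowly drifting signal with one sudden force change:
stream = [0.0, 0.1, 0.15, 2.0, 2.05, 2.1]
print(encode_events(stream))  # only the start and the jump produce events
```

Real tactile sensors using this style of coding transmit orders of magnitude less data than fixed-rate sampling when the contact state is quiet, which is exactly the property the information layer needs.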
The whole coding system will operate under the DIKWP framework, with the high-level Purpose participating in the low-level coding process through a feedback mechanism. For example, following the idea of predictive coding, the Purpose layer provides a prediction signal, and only the error relative to the actual sensor data is encoded, greatly reducing data volume. Through this task we expect to deliver a complete library of haptic cognitive coding algorithms, enabling the system to "understand" tactile information in a human-like way and tightly coupling perception with cognition.
3. Multimodal fusion and artificial consciousness feedback system: Develop tactile multimodal information fusion algorithms and an artificial consciousness-driven feedback control system. This task focuses on spatiotemporal fusion of cross-modal information and on decision-making and regulation at the consciousness level. First, we study fusion strategies for multimodal tactile information, including spatial fusion, which integrates tactile information from different sensing areas/different fingers into an overall environment model, and modal fusion, which jointly analyzes pressure, slip, temperature, and other characteristics to extract higher-level scenario descriptions (e.g., a comprehensive judgment of "the object is about to fall" based on pressure changes and slip rate). Deep neural networks (such as multimodal fusion Transformers) or spiking neural networks can be used for cross-modal feature association and global pattern recognition. We then construct the artificial consciousness feedback loop: on top of fused perception, the artificial consciousness module is introduced to evaluate and regulate the operation of the system.
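A toy version of the "object is about to fall" judgment mentioned above shows the shape of modal fusion: two normalized cues are combined into one scenario label. The weights and thresholds are assumptions for illustration; the project would learn such mappings rather than hand-tune them.

```python
# Toy rule-based modal fusion: combine a normalized pressure-drop cue
# and a slip-rate cue into one drop-risk label. Weights and thresholds
# are illustrative assumptions, not tuned project values.
def fuse_slip_risk(pressure_drop_rate, slip_rate,
                   w_pressure=0.4, w_slip=0.6, risk_threshold=0.5):
    """Weighted fusion of two modal cues into a drop-risk score."""
    risk = w_pressure * pressure_drop_rate + w_slip * slip_rate
    label = "object will fall" if risk > risk_threshold else "grip stable"
    return risk, label

print(fuse_slip_risk(0.9, 0.8))  # both cues high: object slipping away
print(fuse_slip_risk(0.1, 0.0))  # quiet cues: grip stable
```

In the full system this scalar rule would be replaced by a learned fusion network, but the interface, several modal features in, one scenario description out, stays the same.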
Specifically, several core functions of artificial consciousness are studied. Adaptive Purpose management dynamically adjusts the output of the Purpose layer according to the current task goal and environmental changes, for example generating a secondary Purpose (such as "re-grasp") to override the original Purpose when necessary. Anomaly detection and self-regulation monitors inconsistencies or abnormal patterns between perception and the knowledge layer; once one is found (such as an abnormal spike in sensor data or a conflict in knowledge reasoning), the artificial consciousness triggers emergency feedback, directly correcting the execution command or requesting more detailed perceptual information through a fast channel, preventing the error from growing. Meta-learning and memory lets the artificial consciousness module record each decision and its result, continuously updating its understanding of "self-capability" and "environmental regularities" so that strategies are optimized over long-term operation. Through this task we expect to realize a control system with multimodal adaptive fusion and self-regulation: externally, it responds flexibly and correctly to complex tactile scenes; internally, it self-monitors and improves, ensuring highly reliable operation.
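The spike-detection part of the self-regulation function can be sketched with a robust outlier test: flag a reading that deviates from the recent median by more than k times the recent spread. The window convention and k are illustrative assumptions.

```python
# Minimal sensor-spike check for the anomaly-detection function above:
# the last sample is an outlier if it deviates from the median of the
# preceding window by more than k times the window's median absolute
# deviation. Window usage and k=4 are illustrative assumptions.
import statistics

def is_spike(window, k=4.0):
    """True if the newest sample in `window` looks like a spike."""
    history, latest = window[:-1], window[-1]
    med = statistics.median(history)
    spread = statistics.median(abs(x - med) for x in history) or 1e-9
    return abs(latest - med) > k * spread
```

A `True` result would be the event that triggers the fast-channel emergency feedback, e.g. discarding the reading, re-sampling, or switching to a redundant channel.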
4. Typical application demonstration design: To verify the effectiveness of the above technologies, this task will design three typical application demonstration scenarios around real needs and formulate detailed experimental plans and evaluation indicators. The proposed scenarios include, but are not limited to: (A) Industrial parts assembly: e.g., fine bearing assembly or electronic component insertion, where the robot must use tactile perception to correct small misalignments and achieve high-precision assembly. We will introduce assembly tasks with micron-level tolerances and measure the speed and final accuracy of error convergence under tactile closed-loop control (the goal is to reach the level of a skilled human technician). (B) Service robot grasping: the robot grasps daily items in a home environment (fragile or slippery objects such as glasses and eggs) to verify the contribution of multimodal haptics to grip stability and adaptability to different materials. Evaluation indicators include the range of items the robot can successfully grasp with haptic assistance, the success rate, the breakage rate during placement, and the performance improvement over purely visual control. (C) Bionic artificial hand/prosthesis: the developed tactile chip and cognitive system are integrated into prosthetic fingers to test the ability of amputees to perform fine movements with the prosthesis (such as picking up a paper cup and pouring water, or tying shoelaces) and to collect experiential feedback. The evaluation will emphasize the effect of artificial haptic feedback on motion accuracy and user perception.
In this task, we will design comparative experiments in each scenario, collect quantitative data, comprehensively verify the effectiveness and robustness of the system, and further optimize the system design to meet the practical application needs based on the demonstration results.
The above four parts of the research content are interrelated and gradually deepened: from basic hardware to algorithm model to application verification, forming a complete research system. The results of each part will provide support for the next part, for example, the output of the chip will be fed into the cognitive coding system, and the coding and fusion algorithms will be tested and improved in the application demonstration. Through this series of closely connected research work, this project is expected to make a comprehensive breakthrough in the key technology of "brain-like multimodal tactile perception system" and achieve innovative results with international advanced level.
4. The hardware system and software and hardware collaborative design
This project attaches great importance to the collaborative optimization of hardware system design and software algorithm development, and strives to integrate software and hardware to realize brain-like tactile perception and cognitive functions. In the research, we will make full use of the project team's accumulation in haptic AI chips, cognitive models and artificial consciousness architecture to build a technical solution combining software and hardware.
First of all, in terms of hardware platform, we will develop a dedicated haptic perception SoC chip that integrates a multi-modal sensor array and a primary signal processing unit. This chip not only provides high-quality tactile data acquisition, but also undertakes part of the data preprocessing and encoding functions, reducing the computing burden of the host computer. For example, the pulse coding circuit embedded in the chip can directly output the event stream, eliminating the overhead of traditional A/D sampling. On-chip integrated memory computing units, such as memristor arrays, enable simple pattern matching or threshold detection, with initial information screening done locally. The design of these hardware will directly consider the semantic partitioning requirements of the DIKWP model for data, so that the underlying output can be connected to the high-level algorithm more naturally and efficiently. The project team has a research foundation in AI chip design, and has successfully developed the first batch of tactile perception chips in China and participated in related key projects. We will learn from past experience and use a hybrid digital-analog circuit and a low-power architecture to ensure that the chip meets the real-time and energy consumption requirements. At the same time, through the modular design, the chip has good scalability, providing space for subsequent function upgrades or process iterations.
Secondly, in terms of software systems, we will develop supporting cognitive and control software frameworks, including signal processing, middleware, a cognitive reasoning engine, and the artificial consciousness core module. The software design will fully account for hardware characteristics to achieve decoupled, asynchronous, parallel operation of software and hardware: for example, we will develop asynchronous event-driven processing algorithms for the pulse events output by the chip, and, given possible noise and uncertainty on the chip, add redundancy checks and adaptive filtering at the software layer to improve robustness. The cognitive inference engine will leverage high-performance computing libraries and, where available, AI acceleration hardware (e.g., GPUs, NPUs) to run deep learning models or complex knowledge inference. These computations are encapsulated as services through middleware for the upper-level artificial consciousness module to call and coordinate. The artificial consciousness module itself acts as the "brain" of the software, coordinating the operation of all parts: we will realize the prototype of a cognitive operating system, encapsulating the five layers of the DIKWP model as monitorable processes and making the system's internal decision-making transparent. Developers and debugging tools can then monitor the status of Data, Information, Knowledge, Wisdom, and Purpose within the AI much as they monitor an operating system, aiding system optimization and safety supervision.
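The monitorable five-layer pipeline described above can be sketched as a wrapper that snapshots each DIKWP stage's latest output for outside inspection, like processes in an operating system. The stage functions here are placeholder assumptions, not the project's real modules.

```python
# Sketch of a monitorable DIKWP pipeline: each stage's latest output is
# snapshotted so debuggers can inspect any layer. Stage functions are
# illustrative placeholders, not the project's actual modules.
class DIKWPMonitor:
    LAYERS = ("data", "information", "knowledge", "wisdom", "purpose")

    def __init__(self, stages):
        # stages: dict of layer name -> callable(prev_output) -> output
        self.stages = stages
        self.state = {}

    def run(self, raw):
        out = raw
        for layer in self.LAYERS:
            out = self.stages[layer](out)
            self.state[layer] = out   # snapshot visible to monitoring tools
        return out

monitor = DIKWPMonitor({
    "data":        lambda x: [v for v in x if v is not None],   # clean samples
    "information": lambda x: {"mean_force": sum(x) / len(x)},   # feature
    "knowledge":   lambda x: {**x, "material": "smooth"},       # add context
    "wisdom":      lambda x: {"action": "hold", **x},           # decide
    "purpose":     lambda x: x["action"],                       # final intent
})
print(monitor.run([1.0, None, 3.0]))  # the purpose-level decision
print(monitor.state["information"])   # inspect any intermediate layer
```

The point of the design is the `state` dictionary: every layer's semantics stay observable at runtime, which is what makes the decision process auditable.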
In hardware-software co-design, we pay special attention to the coordination among circuit implementation, data fusion, chip control, and cognitive feedback. For example, to support high-speed feedback from the artificial consciousness, we will reserve an emergency interrupt path in the hardware so that high-priority consciousness commands can bypass the conventional control chain and act directly on the actuator, reducing latency. To achieve deep multimodal integration, we will consider offloading part of the fusion computation to hardware, such as using reconfigurable logic to implement simple multimodal correlation calculations on chip, reducing bus transfer pressure. We will also introduce configurable parameters (gain, threshold, etc.) at the circuit level and expose interfaces for high-level software adjustment, so that the artificial consciousness can "tune" sensor and circuit states in real time as needed, realizing software-hardware adaptive linkage.
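The reserved emergency path can be illustrated in software terms as a priority command queue: a high-priority consciousness command jumps ahead of queued regular commands. The queueing discipline shown is an illustrative assumption about the software side of the mechanism; the actual path would be a hardware interrupt.

```python
# Software-level sketch of the emergency bypass: consciousness commands
# with EMERGENCY priority are served before queued NORMAL commands.
# The queue discipline is an illustrative assumption.
import heapq

EMERGENCY, NORMAL = 0, 1  # lower number = served first

class CommandBus:
    def __init__(self):
        self._q = []
        self._seq = 0  # preserves FIFO order within one priority level

    def send(self, priority, command):
        heapq.heappush(self._q, (priority, self._seq, command))
        self._seq += 1

    def next_command(self):
        return heapq.heappop(self._q)[2]

bus = CommandBus()
bus.send(NORMAL, "adjust grip force")
bus.send(NORMAL, "rotate wrist")
bus.send(EMERGENCY, "release immediately")  # consciousness override
print(bus.next_command())  # the emergency command jumps the queue
```

In the real system the analogous hardware interrupt would skip even this queue, delivering the override to the actuator with bounded latency.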
The research accumulation of the project team in related fields strongly supports the above hardware-software co-design. Professor Yucong Duan's team has long been engaged in research on cognitive computing and artificial intelligence, took the lead in proposing the DIKWP model, and has won honors including the Wu Wenjun Artificial Intelligence Award. In recent years, the team has published extensively on knowledge representation, semantic computing, and artificial consciousness evaluation around the DIKWP model and has applied for 114 related invention patents, building a complete DIKWP technology system. These patented technologies span large-model training, cognitive operating systems, and AI governance, providing a rich reserve of algorithms and frameworks for this project. On the hardware side, project team members maintain close cooperation with industry, have an in-depth understanding of the industrial state of tactile sensors and smart chips, and have participated in national chip R&D projects, with hands-on experience from design to tape-out. The team includes researchers skilled in electronic engineering and circuit design as well as software experts in artificial intelligence and cognitive science, achieving genuine interdisciplinary integration. This combination of people and knowledge ensures that hardware and software are coordinated from the start of design, avoiding silos and improving development efficiency and system performance.
To sum up, this project will carry out system development under the guiding principle of software-hardware synergy. With hardware empowering software and software optimizing hardware, we will create a tightly coupled brain-like tactile perception system. This co-design will greatly improve the real-time performance, reliability, and intelligence of the system, and lay the foundation for future engineering results.
5. Phased achievements and assessment indicators
The project is expected to produce the following outcomes in phases over the research cycle, measured by clear indicators:
Phase 1 (first year): tactile chip development and basic performance verification. Deliverables: first-generation multimodal tactile perception chip samples and a supporting signal acquisition system. Assessment indicators: the chip integrates no fewer than 4 kinds of tactile sensing units (pressure, temperature, slip, material, etc.), each meeting its design requirements (e.g., pressure sensor force range 0~20 N with resolution ≤ 0.1 N; temperature sensor range 0~100°C with resolution ≤ 0.1°C); the chip's data output bandwidth supports an update rate of ≥ 1 kHz for the full sensor array; chip power consumption below 50 mW in typical operating mode. Experiments shall show that every sensing channel functions normally and that multimodal signals can be acquired simultaneously without mutual interference, preliminarily verifying the feasibility of multimodal sensing.
Phase 2: Cognitive coding algorithms and DIKWP model implementation. Deliverables: a DIKWP-based tactile cognitive coding software prototype covering the Data→Information→Knowledge→Wisdom processing modules plus a basic Purpose decision-making module. Assessment indicators: the algorithms are tested on standard datasets and in simulated environments, achieving effective compression and recognition of multimodal tactile data. For example, in a laboratory haptic scene, the system compresses raw sensing data by at least 90% without losing key event information; recognition accuracy for common materials (such as metal, wood, plastic) is ≥ 95%; detection delay for slip/grip state changes is ≤ 10 ms, enabling timely feedback. Each layer of the DIKWP model is functionally complete and the five-layer semantic mapping is realized, i.e., manual inspection can confirm that a multi-level representation of the tactile scene has formed inside the system. This phase should also complete initial construction of the knowledge and rule bases (covering no fewer than 50 haptics-related knowledge rules).
Phase 3: Multimodal fusion and artificial consciousness integration platform. Deliverables: a tactile perception and control platform integrating software and hardware, incorporating the multimodal fusion algorithms and the artificial consciousness module. Assessment indicators: the platform achieves closed-loop control of a real robot device and runs stably for at least 1 hour without interruption. The sub-functions of the artificial consciousness module are validated: it detects anomalies and successfully triggers adjustments. For example, if a sensor failure is artificially introduced during testing, the artificial consciousness should detect it and switch to a degraded mode within 100 ms; faced with new objects beyond the scope of the knowledge base, it can instruct the system to take exploratory actions to obtain new information. In terms of fusion performance, the system handles composite scenarios well (such as objects of different weights and temperatures): for example, classification accuracy for objects with 8 different attributes is no less than 90%. In addition, the platform provides a user-friendly monitoring interface displaying the five DIKWP layer states and the Purpose decision-making process in real time, making the system's internal processes interpretable.
Phase 4: Demonstration and evaluation in typical application scenarios. Deliverables: system demonstrations in three typical scenarios completed before the end of the reporting period, with a detailed test report. Assessment indicators: the task success rate and performance indicators of each scenario meet expectations. For example, in the industrial assembly scenario, the system successfully completes at least 95 of 100 precision insertion tasks with average positioning accuracy within 0.1 mm, better than the 1 mm level under purely visual control; in the service robot scenario, the success rate of grasping and placing 10 kinds of daily items is ≥ 90%, with no damage to fragile items. In the prosthesis scenario, test participants can, within one week of training, use the tactile prosthesis to complete at least 3 delicate daily operations with a success rate of ≥ 80%, with subjective feedback rating the tactile realism as "close to the real feel". These scenario test data will demonstrate that the system developed in this project is practical and reliable in real environments. For the problems exposed in each scenario, we will propose improvement plans in the report as a basis for follow-up research or productization.
Phase 5: Theoretical models and standards output. In addition to physical and experimental deliverables, the project will produce important theoretical and standardization results, including the mathematical definition and proof of the DIKWP cognitive feedback model, the design specification of the artificial consciousness interaction architecture, and the corresponding software interface specification. Assessment indicators: publish no fewer than 5 high-level papers during the project (including at least 3 indexed by EI/SCI), apply for no fewer than 5 national invention patents, and lead or participate in formulating at least 1 standard/specification in related fields. These results will extend the project's impact beyond engineering realization, providing guidance for academia and industry.
In summary, this project ensures that the research work proceeds in an orderly manner as planned through clear phases and quantitative indicators, producing measurable, evaluable results. These phased results will jointly verify the degree to which the project objectives are achieved and support final acceptance.
6. Feasibility analysis and transformation plan
6.1 Feasibility Analysis: Research Basis and Team Advantages
This project has a solid research foundation and an excellent team, ensuring that its goals are fully achievable.
First, in terms of research conditions and team capabilities, the project leader, Professor Yucong Duan, and his team have long, deep accumulation in related fields. Professor Duan's team took the lead in proposing the DIKWP artificial consciousness model, which enjoys a high reputation in artificial intelligence and cognitive computing at home and abroad. As early as 2020, the team's achievements based on DIKWP theory won the Wu Wenjun Artificial Intelligence Science and Technology Award, reflecting the innovation and leadership of its research. To date, the team has published a series of high-level papers around the DIKWP model, applied for hundreds of invention patents, and built a complete technical system covering cognitive models, evaluation methods, and operating system frameworks. These studies not only establish this project's unique theoretical and methodological advantages but also demonstrate the team's ability to translate cutting-edge ideas into concrete technologies. It is particularly worth noting that the team's patent portfolio forms a systematic layout in cognitive architecture and artificial consciousness, giving it strong competitiveness and bargaining power relative to international giants. This means our research solutions are globally leading and original, avoiding duplicative competition and providing a favorable IP environment for the project.
The team also has rich experience and resources in hardware and engineering implementation. The project's core members include engineers proficient in chip design and sensor technology as well as software experts familiar with robot system integration and control, a genuine software-hardware talent mix. The team has participated in a number of national research tasks and has successful track records in brain-inspired chips, circuit design, and embedded system development. For example, the tactile perception chip developed by team members in cooperation with a company was approved and successfully developed in 2019, becoming the world's first mixed digital-analog AI tactile chip, which provides valuable experience for this project. We also maintain cooperative relations with top universities and research institutions in China (such as the tactile team at Tsinghua University and the brain-inspired chip team at Fudan University) and can share experimental facilities and test platforms. The existing laboratory is well equipped for micro-nano fabrication, chip testing, and robot control, fully supporting the workflow from chip tape-out, packaging, and testing to system construction and commissioning. The project team therefore has sufficient strength, in both theoretical innovation and engineering implementation, to ensure the research proceeds smoothly.
Secondly, from the perspective of the technical route's feasibility, every key link of the project is supported by preliminary work and has been verified to some extent. The DIKWP model, the methodological core of the project, has already been used successfully in previous studies to explain brain cognitive processes. This shows that using DIKWP for complex information processing is theoretically feasible; we are applying it to haptics for the first time, which is groundbreaking but not without precedent. In terms of tactile chips, similar multimodal sensor development results exist at home and abroad, such as the aforementioned BioTac sensor and the tactile chip from Tashan Technology. These results show that multimodal haptic sensing hardware is fully realizable, and this project will further improve integration and intelligence on that basis. In addition, the work of Professor Liu Qi's team at Fudan University has demonstrated the feasibility of multimodal neuromorphic perception: they realized fused pulse output of temperature and pressure signals through a pressure sensor plus memristor, successfully applied to multimodal object classification. Notably, that work was also supported by the National Key R&D Program, showing that the technical direction we target aligns with the national science and technology agenda and that our research ideas are practical and recognized.
Finally, we have given full consideration to risk analysis and countermeasures. Possible challenges include: the process complexity of multimodal chip integration, performance bottlenecks in the software and hardware implementation of the cognitive model, and uncertainty in artificial consciousness decision-making. We have formulated corresponding plans: in chip development, we adopt a gradual, iterative strategy, performing local functional verification first and then expanding the modal types step by step, avoiding the debugging difficulties of integrating too much at once; for software performance, we profile the computational load of key algorithms and make full use of parallel computing and hardware acceleration (pruning models if necessary) to ensure real-time requirements are met; for the artificial consciousness module, a safety-guardian mechanism is introduced that downgrades the system to a traditional control mode when the artificial consciousness output is abnormal, ensuring safety and stability. Overall, these measures will minimize the project's technical and engineering risks.
In view of the above analysis, we conclude that the implementation of this project is highly feasible. In terms of team, foundation, technology, and risk control alike, we are prepared to tackle the cutting-edge topic of "brain-inspired multimodal tactile chips and perception control systems".
6.2 Achievement Transformation Plan: Application Prospects and Promotion Strategies
The expected outcomes of this project are of great value at both academic and industrial levels, and we have developed a clear strategy for translation and promotion to ensure that the results are implemented.
In terms of academic value, this project will fill a research gap in artificial intelligence at the intersection of multimodal tactile cognition and artificial consciousness. We expect a series of theoretical and technical results (such as the application paradigm of the DIKWP cognitive framework in robot haptics and the implementation of the artificial consciousness dual-cycle architecture) to be published in top international journals and conferences, raising China's academic influence in this field. We also plan to participate actively in standards development and academic organizations in related fields. Relying on international platforms such as the World Conference on Artificial Consciousness (WCAC) hosted by project team members, we will actively promote the DIKWP artificial consciousness model and tactile intelligence technology, strive to have our research results incorporated into parts of the international AI standards framework, and enhance China's voice in global AI governance and standard-setting.
In terms of industrial application, the market prospects for the project's results are very broad. A new generation of large models plus embodied intelligence is emerging worldwide, and major companies urgently need technology that gives robots human-like perception and awareness. In particular, the rise of humanoid robots, represented by Tesla, has quickly made tactile sensors an industry hot spot. According to industry forecasts, intelligent haptic components are expected to become standard parts of future robots, with a huge potential market. Our multimodal tactile chips and cognitive systems respond to this trend and enjoy a clear first-mover advantage. To accelerate industrialization, we plan the following strategies:
Industry-university-research cooperation: During project implementation, we will conduct joint research and development with strong enterprise partners. Preliminary contact has been made with a well-known domestic robot company and a chip company, both of which have expressed strong interest. In the pilot stage, we will work with these enterprises to optimize the chip process and improve system integration so that the results better match industry needs. Partner enterprises can provide engineering experience and production resources, while we contribute core technologies; these complementary strengths will accelerate technology transfer.
Patent layout and licensing: For the project's core technologies, we have filed patents in advance to form a protective patent pool. Once the project is completed, we will transfer the patented results to robot manufacturers and prosthetics companies through licensing or technology-based equity stakes, generating commercial returns. At the same time, we plan to adopt an open-licensing or free-donation policy for certain foundational patents to promote the establishment of industry standards and the development of the overall technology ecosystem. This strategy helps our technology become a widely adopted universal approach and, in effect, a de facto standard.
Incubation and productization: Depending on project progress, we will prepare to establish a dedicated high-tech start-up company or incubate a product line within the project team, focused on bringing the multimodal tactile chips and artificial consciousness control systems to market. We will leverage the team's leadership in intellectual property and technology to attract venture capital and build product solutions spanning chips to complete machines, for example intelligent tactile modules for industrial robot end effectors, or intelligent sensing kits for the prosthetics market. Taking an early lead in these markets is expected to yield considerable economic benefits: in intelligent manufacturing and service robotics, even incremental improvements in accuracy and reliability can open new markets worth billions, and our technology is expected to create new points of industry growth.
Application demonstration and promotion: In the later stage of the project, we will cooperate with user organizations to run application demonstrations and publicize the project results through industry seminars, expos, and other channels. For example, we may invite a leading manufacturer to observe the tactile-enabled upgrade of our assembly robots, or work with a medical institution to present feedback from clinical trials of intelligent prosthetics. Such success stories will greatly enhance market recognition of our technology. Building on seed users and word of mouth, we will further expand our influence through media coverage, white paper releases, and similar means to attract more potential customers.
In summary, the complete "research-development-application" chain was considered from the project's outset, pursuing scientific breakthroughs while also meeting industry needs. Through the careful transformation plan above, we are confident that the project results can achieve real-world application and commercial value while serving major national strategic needs, contributing to the development of China's artificial intelligence and robotics industries.
In conclusion, "Research on Brain-Inspired Multimodal Tactile Chips and Sensory Control Systems Based on the DIKWP Model and Artificial Consciousness" has important theoretical significance and broad application prospects. In terms of background, the project responds to national strategic guidelines and addresses the key scientific issues of brain-inspired tactile perception. In terms of the technical route, it integrates the DIKWP cognitive model and an artificial consciousness architecture, and the scheme is both novel and feasible. The research content is clear and focused; the team has obvious advantages and a solid foundation; the expected results are specified with clear, assessable, and measurable indicators; and the transformation path is well defined with great potential. We will devote every effort to advancing the project with a scientific, rigorous, realistic, and innovative attitude, striving for breakthrough results, promoting a new leap in China's intelligent perception and artificial consciousness technology, and contributing to the building of a strong nation in science and technology.