Digital Twins - Techniques, Systems, Components
Digital twins are a backbone of Industry 4.0. With the aid of modelling software, we can now create digital replicas of objects, parts, components, buildings or entire cities. By running detailed analyses and simulations, we can evaluate possible adjustments, outcomes and changes to their blueprint.
As representations of real-world objects, they are constantly updated with data from the physical entity and its environment. This covers material characteristics, shape complexities, functionalities, tolerances and many other categories throughout the entire development of the digital twin (DT). The aspect that distinguishes this process from the traditional one is ongoing real-time adjustment and updating. Traditionally, we would be dealing with a linear sequence of events, such as the design phase preceding the construction and manufacturing phases. With a DT, the design outcome can be optimized in ways that are not possible in traditional practice. By adjusting digital twins in real time, we can remove these issues and gaps:
• Gap between design and construction → traditionally, there is no way to assess whether the design was correct until it is built
• Prediction of quality before inspection → there is no way to verify that the desired quality was reached before the inspection phase, or until the final design is put to use
• Improvement between batches in construction and manufacturing → the quality of the next batch cannot be improved during the construction phase without returning to the design phase or waiting for the inspection phase
• Fluctuation → a lack of real-time information causes deviations and anomalies throughout the process
These are just a few examples, but imagine an upgrade to traditional industrial automation: away from linear sequencing and towards simultaneous sequencing.
The Digital Twin Technique
Digital twins were ‘fathered’ by Michael Grieves, who was the first to propose the concept as an upgrade to industrial PLM (product lifecycle management) in 2003. The proposal quickly gained attention due to its far-reaching potential and has been successfully implemented over the last two decades. In his work he described three parts of the DT technique:
— Physical entity
— Virtual entity
— Data connection between them
Digital twinning is a multi-physics, multi-scale, probabilistic simulation of a complex system that captures everything there is to know about a physical twin. It creates a mirror effect that reflects the physical twin's states and turns them into data sets, which are then used as service data in integrated simulations.
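As a minimal sketch of Grieves' three-part structure, the Python below models a physical entity, its virtual counterpart, and a data connection transferring state between them. All class names, fields and thresholds are hypothetical illustrations, not a standard API.

```python
from dataclasses import dataclass, field


@dataclass
class PhysicalEntity:
    """Stands in for a real asset; in practice, readings come from sensors."""
    asset_id: str
    temperature_c: float = 20.0


@dataclass
class VirtualEntity:
    """The digital replica: a mirrored state plus derived quantities."""
    asset_id: str
    state: dict = field(default_factory=dict)

    def simulate_overheat_risk(self) -> bool:
        # A toy 'simulation': flag risk when the mirrored temperature is high.
        return self.state.get("temperature_c", 0.0) > 80.0


def data_connection(physical: PhysicalEntity, virtual: VirtualEntity) -> None:
    """The third component: transfers state from physical to virtual twin."""
    virtual.state["temperature_c"] = physical.temperature_c


pump = PhysicalEntity("pump-01", temperature_c=85.0)
twin = VirtualEntity("pump-01")
data_connection(pump, twin)
print(twin.simulate_overheat_risk())  # True
```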
The application is multidisciplinary, and each discipline may define a DT accordingly. Even though there is no unified perspective, a few axiomatic points can be summarized. More specifically, if these criteria are met, we are dealing with a DT process:
DT system components – the system consists of physical entities, their virtual models, and a physical-digital data transfer system.
Bi-directional interactive connections – the physical asset and its digital replica exchange information in both directions, with AI and human operators in the loop.
Synchronized updating – the DT is continuously updated in real time according to the state of the physical twin, ensuring a consistently reliable representation used to optimize, control and schedule tasks (a minimal loop sketch follows this list).
Lifecycle completion – the simulation-driven dynamic models follow an evolutionary path from ‘cradle to grave’, covering all stages from design and prototyping to disposal.
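To make the bi-directional connection and synchronized-updating criteria concrete, here is a minimal loop sketch: readings flow from a (simulated) physical asset into the twin, and a corrective command flows back. The sensor source, threshold and command name are assumptions for illustration.

```python
import random
import time


def read_sensor() -> float:
    """Hypothetical stand-in for a real-time sensor feed."""
    return random.uniform(60.0, 100.0)


def send_command(command: str) -> None:
    """Hypothetical actuator interface back to the physical asset."""
    print(f"-> physical asset: {command}")


twin_state = {"temperature_c": None}

for _ in range(5):                                # in production: an endless loop
    twin_state["temperature_c"] = read_sensor()   # physical -> digital
    if twin_state["temperature_c"] > 90.0:        # decision made in the twin
        send_command("reduce_load")               # digital -> physical
    time.sleep(0.1)                               # update interval
```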
Mirroring, Shadowing and Threading
The process of handling DTs is continuously updated to integrate new key technologies, with the aim of improving three crucial aspects: mirroring, shadowing and threading.
Mirroring stands for the capacity to generate a virtual duplicate. It describes how a physical object is transferred into an offline version: a digital reflection, whether fully accurate or not, that carries all associated information into the digital domain. For manual work, a range of computer-aided design (CAD) tools is available for the replication process, although there is no general agreement on standards of reproduction. As far as automated systems are concerned, tools such as point cloud synthesis, tomography and ultrasonic testing are at one's disposal.
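As a toy illustration of automated mirroring, the sketch below compares a (simulated) scanned point cloud against nominal geometry and reports the deviation. Real pipelines would use registration algorithms such as ICP and spatial indexing rather than this brute-force NumPy version.

```python
import numpy as np

# Nominal geometry (e.g., points sampled from a CAD model) and a simulated scan.
nominal = np.random.rand(1000, 3)                           # hypothetical reference points
scan = nominal + np.random.normal(0, 0.002, nominal.shape)  # scan with measurement noise

# Per-point deviation: distance from each scanned point to the nearest
# nominal point (brute force; fine for a toy example of this size).
dists = np.linalg.norm(scan[:, None, :] - nominal[None, :, :], axis=2)
deviation = dists.min(axis=1)

print(f"mean deviation: {deviation.mean():.4f}, max: {deviation.max():.4f}")
```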
Shadowing is the ability to replicate in the virtual world what takes place in reality, creating synchronization between the physical and digital counterparts. Because it always registers changes made to the physical entity, shadowing can be seen as a digital reflection with real-time updates from its physical source. This is accomplished through technologies such as model matching, data association and reinforcement learning.
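A minimal shadowing sketch, using only the Python standard library: a producer thread stands in for the physical asset's telemetry stream, and a consumer keeps the digital shadow synchronized as updates arrive. In production this connective role is typically played by a message broker; the metric name here is hypothetical.

```python
import queue
import random
import threading
import time

telemetry = queue.Queue()          # stands in for a message broker (e.g., MQTT)
shadow = {"vibration_mm_s": None}  # the digital shadow's mirrored state


def physical_asset() -> None:
    """Hypothetical asset emitting a stream of sensor readings."""
    for _ in range(5):
        telemetry.put({"vibration_mm_s": random.uniform(0.1, 4.0)})
        time.sleep(0.05)
    telemetry.put(None)  # sentinel: stream ended


def shadow_updater() -> None:
    """Keeps the shadow synchronized with every physical change."""
    while (msg := telemetry.get()) is not None:
        shadow.update(msg)
        print("shadow state:", shadow)


threading.Thread(target=physical_asset).start()
shadow_updater()
```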
Threading enables the connection of different operation stages and DT instances to form digital threads, allowing information islands to be linked. Subsystems can then receive real-time updates from upstream or downstream processes, and machines can query data from DTs. In addition, contextual data can be transmitted through the thread, enabling a plant-wide monitoring and control center to collect more detailed information. Finally, threading makes fleet learning possible by allocating computing power to cloud servers connected through the digital thread.
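The sketch below illustrates the idea of a digital thread as an append-only record linking lifecycle stages, which downstream subsystems can query. The class, method and stage names are hypothetical, not a standard threading API.

```python
from dataclasses import dataclass, field


@dataclass
class DigitalThread:
    """Append-only record linking DT instances across lifecycle stages."""
    records: list = field(default_factory=list)

    def publish(self, stage: str, data: dict) -> None:
        self.records.append({"stage": stage, **data})

    def query(self, stage: str) -> list:
        """Lets a downstream subsystem pull context from an upstream stage."""
        return [r for r in self.records if r["stage"] == stage]


thread = DigitalThread()
thread.publish("design", {"part": "bracket", "tolerance_mm": 0.05})
thread.publish("production", {"part": "bracket", "measured_mm": 0.04})

# An inspection subsystem queries upstream design intent through the thread:
print(thread.query("design"))
```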
Boosting DT Systems
To realize a more advanced version of DT, the following technologies can be utilized:
Extended Reality
Extended reality technologies such as augmented reality (AR), virtual reality (VR) and mixed reality (MR) provide new types of human-machine interfaces with smoother, more coherent interactions. Through 3D visualization, multi-sensory fusion and body-field sensors, a sense of presence is brought to operators and managers. This pairs the physical asset and the digital twin in an intuitive manner, while also capturing the human participant's intentions to enable a human-in-the-loop design. Furthermore, experienced experts gain easy remote access to the digital twins. To apply extended reality effectively, a potent information infrastructure is needed for the terminal devices (e.g., smartphones, glasses, headsets), as well as adequate bandwidth for real-time communication. Web services such as WebAR have become popular solutions for platform independence within a web browser.
Smart sensing and intelligent perception
Sensors measure the external environment and convert the measurements into information useful for decision-making. Smart sensing and intelligent perception systems differ from conventional devices driven purely by energy-transformation principles: they take multiple signals as inputs and, with the aid of data integration and adaptive learning, provide more reliable results. This increases the ability to account for external nuances and gives decision-making systems a better understanding of the context. The various approaches can be categorized as those based on machine vision, multivariate analysis and heterogeneous data fusion.
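As a small example of taking multiple signals as inputs to produce a more reliable result, the following sketch fuses three redundant temperature sensors by inverse-variance weighting; the readings and noise variances are assumed values.

```python
import numpy as np

# Readings from three redundant sensors and their (assumed known) noise variances.
readings = np.array([74.8, 75.3, 74.1])   # hypothetical temperatures, degrees C
variances = np.array([0.20, 0.05, 0.50])  # lower variance = more trusted sensor

# Inverse-variance weighting: the fused estimate favours the reliable sensor
# and has lower variance than any single input.
weights = 1.0 / variances
fused = np.sum(weights * readings) / np.sum(weights)
fused_variance = 1.0 / np.sum(weights)

print(f"fused reading: {fused:.2f} C (variance {fused_variance:.3f})")
```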
Data-driven modeling
Building physics-based or first-principles models for complex scenarios can be challenging, yet such models are needed to fuel multi-physics simulation engines such as ANSYS. Meanwhile, an abundance of process data is generated and collected during operation, including log data, sensor measurements, camera images and videos. Data-driven or data-model integrated approaches are therefore a way forward for constructing and capitalizing on digital twins. The key is to extract abstract knowledge from big, heterogeneous datasets, shifting from hypothesis-driven solutions built on small data to machine learning models trained on big data. Such models can be highly accurate despite inherent nonlinearity or coupling, opening up new applications in the DT framework.
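A minimal sketch of data-driven modeling, assuming scikit-learn is available: synthetic data stands in for expensive simulation runs, and a learned surrogate captures the nonlinear, coupled response so new cases can be evaluated quickly. The input/output quantities are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for expensive multi-physics simulation runs:
# inputs (load, temperature) -> output (deflection), with nonlinear coupling.
rng = np.random.default_rng(0)
X = rng.uniform([0, 20], [100, 90], size=(500, 2))
y = 0.01 * X[:, 0] ** 1.5 * (1 + 0.02 * (X[:, 1] - 20)) + rng.normal(0, 0.05, 500)

# A data-driven surrogate learns the response from the dataset, then answers
# queries far faster than re-running the simulation engine.
surrogate = GradientBoostingRegressor().fit(X, y)
print(surrogate.predict([[80.0, 60.0]]))  # predicted deflection for a new case
```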
Machine vision
In recent years, machine vision-based approaches have been highly sought after for purposes such as object detection and tracking, virtual measurement and online inspection. Deep neural networks such as GoogLeNet, ResNet and the Restricted Boltzmann Machine have been constructed to provide superior performance. Furthermore, machine vision techniques can be used to create human-in-the-loop designs such as gesture identification and eyeball-tracking systems, offering more flexible alternatives to traditional control sticks or panels.
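As an illustrative example of machine vision for object detection, the sketch below runs a pretrained detector from torchvision (assuming torchvision >= 0.13; the weights download on first use) over a dummy frame standing in for a shop-floor camera image.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Load a pretrained detector (downloads COCO-trained weights on first run).
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

# A dummy frame standing in for a camera image on the shop floor.
frame = torch.rand(3, 480, 640)

with torch.no_grad():
    detections = model([frame])[0]

# Keep confident detections only; labels index into the COCO category list.
keep = detections["scores"] > 0.8
print(detections["boxes"][keep], detections["labels"][keep])
```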
Database technique
Database techniques are vital for the successful recording and storage of structured data, allowing it to be quickly indexed, accessed and retrieved. The data generated by physical/digital twins often cannot be stored or analysed locally. Instead, historical and fleet data are stored in designated database servers, containers for the virtual assets that ensure a secure environment. Continued development of database technologies will enable a secure and efficient large-volume digital twin ecosystem. Database technologies can also be combined with data mining and machine learning to provide the intelligent and predictive capabilities used in fleet learning and business analysis.
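A minimal sketch of recording and retrieving structured twin telemetry, using Python's built-in sqlite3 as a stand-in for a designated database server; the schema, metric and asset names are hypothetical.

```python
import sqlite3

# An in-memory database stands in for a designated DT database server.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE telemetry (
    asset_id TEXT, ts REAL, metric TEXT, value REAL)""")

# Structured twin data is indexed so it can be accessed and retrieved quickly.
con.execute("CREATE INDEX idx_asset ON telemetry (asset_id, metric)")
rows = [("pump-01", 1.0, "temp_c", 74.8), ("pump-01", 2.0, "temp_c", 75.3)]
con.executemany("INSERT INTO telemetry VALUES (?, ?, ?, ?)", rows)

# Historical data for one asset/metric, e.g., to feed fleet learning.
for row in con.execute(
        "SELECT ts, value FROM telemetry WHERE asset_id=? AND metric=?",
        ("pump-01", "temp_c")):
    print(row)
```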
Data Mining
Data mining techniques correlate different kinds of data and aim to uncover the factors causing variance in datasets. This includes production-related information, statistics associated with marketing and sales, and external factors such as seasonal occurrences, regional sentiment and political events that affect revenue and costs. Being able to link the stages of design, production and marketing is immensely beneficial in the development of data technology.
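As a small example of correlating production data with external factors, the sketch below uses pandas on hypothetical monthly figures to see which variables move with revenue.

```python
import pandas as pd

# Hypothetical monthly data joining production, sales and an external factor.
df = pd.DataFrame({
    "units_produced": [900, 950, 1100, 1050, 1200, 980],
    "defect_rate":    [0.04, 0.05, 0.02, 0.03, 0.02, 0.05],
    "revenue_k":      [180, 185, 240, 225, 260, 190],
    "avg_temp_c":     [5, 8, 15, 18, 24, 10],   # seasonal external factor
})

# Pairwise correlations hint at which factors drive variance in revenue.
print(df.corr()["revenue_k"].sort_values(ascending=False))
```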
Cloud computing
Realizing digital twins often involves heavy computing that is difficult to anticipate, making it hard for startups to commit to purchasing expensive computation resources or to maintain their own IT staff. That is where cloud computing proves useful: users pay for access over the internet on a "pay-as-you-go" basis. Not only does this ensure the scalability of the system in terms of compute power, it also minimizes the risk of server downtime compared to on-site hosting, thanks to optimization and comprehensive cooling/anti-dust techniques that deliver close to 100% uptime. Lastly, value-added services enabled by DTs can be securely provided remotely, with reliable data recovery and software updates via the cloud.
Edge computing
Machine vision-based implementations of DTs offer value-added advantages, but they require substantial computing power for real-time video/image processing. Cloud computing removes the computational constraint, yet a high application workload can overburden available network resources, causing delays in time-sensitive applications. To mitigate this problem, edge computing enables local devices to process data immediately and natively rather than relying solely on the cloud. Edge computing solutions are further enhanced by dedicated AI chips: deep neural networks can be trained in a centralized cloud server, with a copy of the trained network deployed to a camera connected to an adjacent edge computing server.
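A minimal sketch of the cloud-train/edge-deploy pattern described above, using scikit-learn and pickle as stand-ins: the model is trained "in the cloud" on pooled data, serialized, and a copy is loaded "at the edge" to score camera frames locally. All data and feature names are synthetic.

```python
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# --- cloud side: train a defect classifier on pooled fleet data ---
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))                 # hypothetical image features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # hypothetical defect labels
model_bytes = pickle.dumps(LogisticRegression().fit(X, y))

# --- edge side: load the shipped copy and score frames locally, avoiding
# the round-trip to the cloud for time-sensitive inspection decisions ---
edge_model = pickle.loads(model_bytes)
frame_features = rng.normal(size=(1, 4))       # features from one camera frame
print("defect" if edge_model.predict(frame_features)[0] else "ok")
```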
Final Thoughts
Over the coming years, more businesses are expected to join the DT bandwagon, benefiting from its cost-saving and revenue-generating capabilities as it becomes a pivotal technology of the Industry 4.0 era. Digital twinning offers great potential for creating new high-value services - an exciting journey from which there is no turning back.
Title gif credit: Alastair Gray