
In-process tracking and traceability for zero-defect manufacture of electrical machines

31 August 2020

Words by Dr Michael Farnsworth from the University of Sheffield.

The manufacture of electrical machines is a complex task, combining many traditional machining, forming and cutting processes interspersed with assembly, integration and test. Across this manufacturing lifecycle, skilled manual work also plays a significant part, and this ultimately impacts the quality of the end product and its performance over its operating life.

Our aim has been to understand these activities and their variations, and how such processes can be digitised to allow in-process tracking and traceability for defect detection, mitigation and certification at the point of manufacture. One key component of this has been our engagement with the Hub's industrial partners and with external OEMs and electrical machine (EM) manufacturers.

Over the year, we have surveyed a number of companies on their practices in EM manufacture, with a particular focus on the balance of automated and manual activities. Responses from four recently surveyed companies are summarised in Table 1. The feedback and discussions from these companies have highlighted specific areas of EM manufacture to target, possible future use cases, the inspection technologies these companies already use, and gaps that need to be addressed.

Table 1: Industry perspective on the involvement of manual and automated activities in the following EM manufacturing processes, provided by four companies (A, B, C and D).

EM manufacture digitisation use cases

Taking on board feedback from Hub members, the literature and the surveyed companies, we developed a number of use cases inspired by industry and shop-floor activities. These focus on steps within the manufacture and assembly of coils and windings, particularly where human interaction is required.

Using standard RGB and 3D depth imaging sensors in a laboratory test bed (see figure 1), along with motion-tracking technology, we have set out to build a database of captured process data for a set of tasks (cable and end-termination assembly, coil winding, trickle-fed wire assembly) and manipulation activities (cable guidance and positioning, wire twisting or bending). The outcome of this experimentation was used to investigate how machine vision, in particular deep neural networks (DNNs), could aid the tracking and traceability of these processes.
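
As a rough illustration of the capture side of such a test bed, the sketch below grabs RGB frames with OpenCV and files them under a task label. It is a minimal sketch only: the device index, task names, frame rate and output layout are all assumed rather than taken from our actual rig, and the depth and motion-tracking streams would be recorded separately through their own vendor SDKs.

```python
# Minimal sketch of labelled RGB frame capture for a process dataset.
# Assumes OpenCV and a camera on device 0; depth and motion-capture
# streams would be recorded in parallel via their own SDKs.
import time
from pathlib import Path
import cv2

def capture_task(task="coil_winding", out_dir="dataset", fps=10.0, seconds=30):
    """Record frames for one manipulation task into dataset/<task>/."""
    target = Path(out_dir) / task
    target.mkdir(parents=True, exist_ok=True)
    cap = cv2.VideoCapture(0)
    interval, t_end = 1.0 / fps, time.time() + seconds
    while time.time() < t_end:
        ok, frame = cap.read()
        if not ok:
            break
        # Timestamped filenames let frames be aligned with other sensors later.
        cv2.imwrite(str(target / f"{time.time():.3f}.png"), frame)
        time.sleep(interval)
    cap.release()

capture_task("cable_guidance", seconds=10)
```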

Figure 1: Laboratory-based setup for coils and winding experimentations using RGB and 3D depth imaging sensors.

Coil winding failure and in-process monitoring

Recent advances in machine vision, particularly through the use of deep neural networks (DNNs), provide state-of-the-art capabilities for classifying anomalies and defects within coils and windings. Our approach was to take current convolutional neural network (CNN) architectures such as ResNet50, trained on known image databases such as ImageNet, and repurpose them through a technique called transfer learning to identify coil failures in our use-case dataset.

Through the 3D printing of a stator core tooth, a number of coils were wound and imaged (see figure 2), each labelled with a specific set of characteristics (pass, failed-gap, failed-crossover) and used to train our neural network to classify errors. This approach achieved a test classification accuracy of above 90%, and we are now moving beyond this to investigate coil failures from real use cases with our industrial partners.
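
For illustration, a minimal PyTorch sketch of this transfer-learning setup is given below, using torchvision's ImageNet-pretrained ResNet50 with a new three-class head for the labels above. The frozen backbone, folder layout and hyperparameters are assumptions for the sketch, not the exact configuration we used.

```python
# Transfer learning sketch: repurpose an ImageNet-trained ResNet50 to
# classify coil images as pass / failed-gap / failed-crossover.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Load the pretrained backbone and freeze its convolutional features.
model = models.resnet50(pretrained=True)
for p in model.parameters():
    p.requires_grad = False
# Replace the final layer with a new 3-class head (only this part is trained).
model.fc = nn.Linear(model.fc.in_features, 3)

tf = transforms.Compose([
    transforms.Resize((224, 224)),               # ResNet50's expected input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])
# Expects coil_images/pass/, coil_images/failed-gap/, coil_images/failed-crossover/
train_set = datasets.ImageFolder("coil_images", transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

Freezing the backbone in this way lets the ImageNet features carry over, so only a small labelled coil dataset is needed to train the new classification head.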

Figure 2: Example coils generated for image dataset, including failed winding on the right.

Cables and connections

Another area of research is how we can capture and model the spatial and temporal history of cables and wires as they are manipulated by a human operator. This is extremely difficult, as it involves the capture and digitisation of many separate but important factors, including the cable itself, which must be extracted from an image and then modelled in 3D, frame by frame, if the end goal is real-time process monitoring.

The operator's hands and tools must also be captured and modelled, so that their forces and interactions with the cable or wire are taken into account. Once achieved, however, this offers a powerful approach to understanding human-workpiece interaction at this stage of manufacture, providing in-process tracking and traceability, feedback and guidance.

Our early experimentation has led us to two main methods for extracting cables from a natural image or scene. Using a purpose-built demonstrator for cable manipulation, we have built up an image and video dataset containing both simple and more complex examples of cable manipulation.

Using this, our first attempt takes a more engineered approach, in which each frame is captured and broken down into 'SuperPixels': clusters of pixels with similar shape, colour or texture. A graphing algorithm then connects similar nodes, in a chain-like fashion, until a complete wire or cable is constructed and extracted from the scene, as shown in figure 3.
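
The sketch below illustrates the idea with scikit-image: the frame is over-segmented with SLIC, each superpixel is summarised by its centroid and mean colour, and a greedy pass chains colour-similar neighbouring regions together. The thresholds, segment count, seed choice and file name are illustrative, not our actual parameterisation.

```python
# Sketch of the superpixel-and-chaining idea using scikit-image's SLIC.
import numpy as np
from skimage import io, segmentation

frame = io.imread("frame.png")
# Over-segment the frame into regions of similar colour ("superpixels").
labels = segmentation.slic(frame, n_segments=400, compactness=10, start_label=1)

# Summarise each superpixel by its centroid and mean colour.
regions = []
for lbl in np.unique(labels):
    ys, xs = np.nonzero(labels == lbl)
    regions.append({"centroid": np.array([ys.mean(), xs.mean()]),
                    "colour": frame[ys, xs].mean(axis=0)})

def chain_cable(regions, seed, colour_tol=25.0, dist_tol=40.0):
    """Greedily link nearby, colour-similar superpixels into a cable chain."""
    chain, remaining = [seed], [r for r in regions if r is not seed]
    while True:
        current = chain[-1]
        candidates = [r for r in remaining
                      if np.linalg.norm(r["colour"] - current["colour"]) < colour_tol
                      and np.linalg.norm(r["centroid"] - current["centroid"]) < dist_tol]
        if not candidates:
            return chain  # no similar neighbour left: the chain is complete
        nxt = min(candidates,
                  key=lambda r: np.linalg.norm(r["centroid"] - current["centroid"]))
        chain.append(nxt)
        remaining.remove(nxt)

# Seed with a region whose mean colour matches the cable (chosen manually here).
cable = chain_cable(regions, seed=regions[0])
```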

Figure 3: Method one - images are broken down into their constituent superpixels (left) and used to extract cables from the scene (middle); the extracted cable is compared against the target position with an overlap score (right).

One of the disadvantages of the SuperPixel method is that it requires an engineered approach, parameterised to best identify the shapes, colours and textures associated with the cables or wires we wish to extract. Ideally we would like to remove this requirement, and an approach that needs far less feature engineering can once again be found in DNNs.

Image segmentation is the identification and classification of individual pixels in an image as belonging to a particular object or class. In our case, this means segmenting the pixels associated with cables or wires, preferably in a generalisable sense, i.e. of any size, shape, colour or texture. A DNN architecture called U-NET is one approach for achieving this.

Trained at first on an available but small dataset of cable images and their corresponding ground truths, the network is capable of segmenting out the cable when applied to a real scene of cable manipulation; however, false positives also arise across other edges in the scene, as seen in figure 4.
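
As a point of reference, a much-reduced U-Net-style network is sketched below in PyTorch: an encoder-decoder with skip connections that outputs a per-pixel cable/background logit map. The depth, channel widths and input size are illustrative and far smaller than the published U-NET, and this is not our exact trained network.

```python
# Minimal U-Net-style segmentation network in PyTorch (sizes illustrative).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, 1, 1)  # one channel: cable vs background

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Skip connections: concatenate encoder features into the decoder.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # logits; apply sigmoid for per-pixel probability

mask_logits = TinyUNet()(torch.randn(1, 3, 256, 256))  # -> (1, 1, 256, 256)
```

The skip connections (the torch.cat calls) let fine spatial detail from the encoder reach the decoder, which is what makes this architecture well suited to thin structures such as wires.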

Figure 4: U-NET image segmentation, training example (a), ground truth (b), pixel segmentation prediction (c) and predictions on a real scene from cable manipulation demonstrator (d).

Process parameter and tool wear monitoring 

The focus of this work is coil winding and the process variation that occurs, for example in the spool-supplied wire as the coil is being wound. Building on the previous examples, we are investigating how to capture and digitise the wire as it leaves the spool, track its shape, and characterise its quality before it is wound onto the bobbin or stator tooth.

This also includes tracking additional parameters (wire tension, resistance, spool feed speed, etc.) that may impact the process. This is a challenging task, as it involves capturing image and process data at high speed, and equally requires fast inference if some degree of in-process control is to be achieved.
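
One simple way to keep such parameters aligned with the image stream is to log a timestamped record per frame. The sketch below shows one possible structure; the field names follow the parameters listed above, while the record format, units and file layout are assumptions for illustration.

```python
# Hypothetical per-frame process record, keeping winding parameters
# aligned with captured images via a shared timestamp.
import csv
import time
from dataclasses import dataclass, asdict

@dataclass
class WindingRecord:
    timestamp: float       # seconds, shared with the frame-capture clock
    frame_path: str        # image captured at this instant
    tension_n: float       # wire tension, newtons
    resistance_ohm: float  # wire resistance, ohms
    feed_speed_mps: float  # spool feed speed, metres per second

def log_records(records, path="winding_log.csv"):
    """Append records to a CSV so they can be replayed against the video."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[
            "timestamp", "frame_path", "tension_n",
            "resistance_ohm", "feed_speed_mps"])
        if f.tell() == 0:
            writer.writeheader()
        for r in records:
            writer.writerow(asdict(r))

log_records([WindingRecord(time.time(), "frames/0001.png", 2.4, 1.02, 0.15)])
```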

An early test bed has been developed to capture wire as it is fed and wound onto a bobbin, providing us with a simple but useful image and video dataset for training our classification algorithms. This task will investigate how to track the wire's shape as it leaves the spool and its impact on the final coil and packing factor, and also how to characterise the enamelled copper wire for any degradation or insulation failure.

Figure 5: Spool-fed wire wound around a simple bobbin, showing wire kinks (a), twists (b) and insulation failure (c).

Exploring generative models for EM manufacture

One of the most fascinating fields of research and application to come out of the recent explosion of deep neural networks has been that of 'generative models', particularly 'generative adversarial networks' (GANs).

Where discriminative models have shown much success, particularly in machine vision and the ability to classify or detect objects, GANs have shown a capability for creating new information or data across a wide range of problem domains. Probably the most cited examples come from imaging, whether it is the generation of new images (faces, buildings, scenes, paintings), style transfer (Monet to Picasso), image upscaling, or text-to-image generation.

One of the challenges in applying discriminative DNN models, for example, is that they often require large datasets to train, and when it comes to manufacturing, and specific failure modes in particular, this can be a limiting factor. However, we are exploring how GANs can take smaller sets of examples, more typical of what can be found in most EM manufacturing companies, and build generative models that create further examples for training future discriminative DNNs for in-process tracking and traceability.

In figure 6, a pre-trained GAN is repurposed and retrained to create a generative model for a new domain. In the first instance we show the images generated as a model learns to output novel butterfly designs, and how they improve as training progresses. The last example shows an image from a separate model trained to generate new coil failure images.
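
For readers unfamiliar with GANs, the PyTorch sketch below shows the adversarial training step at the heart of the approach: a generator learning to fool a discriminator that separates real from generated images. The network sizes, image resolution and hyperparameters are illustrative; in the transfer setting described above, the generator and discriminator would be initialised from a pre-trained model's saved weights rather than from scratch.

```python
# Minimal GAN training step in PyTorch (sizes and rates illustrative).
import torch
import torch.nn as nn

z_dim = 100

G = nn.Sequential(  # generator: latent vector -> 64x64 RGB image
    nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.ReLU(True),   # 4x4
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),      # 8x8
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(True),       # 16x16
    nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(True),       # 32x32
    nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Tanh())            # 64x64

D = nn.Sequential(  # discriminator: image -> real/fake logit
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),             # 32x32
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),            # 16x16
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),           # 8x8
    nn.Conv2d(128, 1, 8), nn.Flatten())                       # single logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def gan_step(real):  # real: (batch, 3, 64, 64) tensor of training images
    batch = real.size(0)
    fake = G(torch.randn(batch, z_dim, 1, 1))
    # Discriminator: real images labelled 1, generated images labelled 0.
    loss_d = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to make the discriminator label fakes as real.
    loss_g = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Each call to gan_step consumes one batch of real images; over many epochs the generator's outputs come to resemble the training set, which is what the progression in figure 6 visualises.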

Figure 6: On the left, the generated images of a GAN as it is retrained for producing new images from a butterfly dataset. On the right, a final generated example from a coil failure GAN model.

For more information about this work-package please contact Dr Michael Farnsworth: m.j.farnsworth@sheffield.ac.uk