Spectrogram Based Detection Algorithm for Back-Bead in Gas Metal Arc Welding Process using Convolution Neural Network

J Weld Join > Volume 39(2); 2021 > Article
Jin, Park, and Rhee: Spectrogram Based Detection Algorithm for Back-Bead in Gas Metal Arc Welding Process using Convolution Neural Network

Abstract

An automated welding system is essential for ensuring stable, good welding quality and improving productivity in the gas metal arc welding (GMAW) process. Accordingly, various studies have been conducted on the establishment of smart factories and on the demand for good weldability in the production and manufacturing fields. In shipbuilding and pipe welding, a uniformly generated back-bead is an important criterion for judging the mechanical properties and weldability of the welded structure, and is also a key factor in realizing an automated welding system. Therefore, in this study, the welding current signal measured in real-time in the GMAW process was pre-processed by a short-time Fourier transform (STFT) to obtain a time-frequency domain feature image (spectrogram), and a back-bead detection algorithm was developed based on it. To accelerate the training of the proposed convolution neural network (CNN) model, we used non-saturating neurons and a highly efficient GPU implementation of the convolution operation. When the proposed detection model was applied to the actual welding process, the detection accuracies for regions with and without back-bead formation were 95.8% and 94.2%, respectively, confirming excellent classification performance for back-bead formation.

1. Introduction

Gas metal arc welding (GMAW) is a welding method that creates an arc between a consumable electrode and the base material to cause melting. It is widely used in various industries such as shipbuilding and automobile manufacturing, and is suitable for automatic welding1). However, since the welding quality in arc welding processes depends on the skill of the operator, there is a limit to ensuring good welding quality and improving productivity. To address the inherent working environment problems of the arc welding process, such as harmful gas, dust, and strong arc light, research on automated welding systems using robots and other welding equipment has recently been active2-7).
The back-bead refers to a weld bead formed on the back side of the weld joint. Back-bead formation is considered one of the important factors in determining the mechanical properties and weldability of the welded structure8). The shape of a back-bead also varies greatly depending on the welding motion, and sometimes a back-bead may not form even when welding is performed under the same welding conditions. Therefore, to ensure good welding quality, it is necessary to develop a technology that monitors the back-bead formed on the back side of the weld in real-time.
Many studies have been conducted to predict a good weld bead and to improve the quality of the weld in the GMAW process. The GMAW process variables were analyzed and optimized using various methods such as mathematical statistical experiment design, existing welding process improvement, linear regression model, and artificial neural network. Lee used multiple regression modeling for controlling welding process variables to obtain the desired back-bead shape, and reversed this to develop a process variable prediction system that could be used in real-time automated welding9). Jeong modeled the welding process to check the relationship between the welding variable and the bead shape, and presented a method to obtain the optimal back-bead shape by back-propagation using the welding process variable as an input value10). Lee proposed a method to predict the width and depth of the back-bead using an artificial neural network when four welding variables were input as root spacing, welding current, arc voltage, and welding speed11). Kim improved the accuracy of the mathematical model for the width and height of the back-bead and predicted the shape of the back-bead by using statistical methods in the open-gap type of pipeline joining process12). Nagesh used the backpropagation neural network algorithm to correlate the welding process variables with the characteristic variables of the weld bead geometry and penetration, and investigated the prediction of the bead shape and penetration using this13). Cho analyzed the behavior of the molten pool and the weld bead in V-groove GMA welding with and without a root gap for various welding positions14). Jesús Emilio Pinto-Lopera developed a system that measured the width and height of weld bead in real-time in the GMAW process using a vision camera and optical sensor, and compared it with a 3D scanner15). 
However, no previous study has predicted or detected back-bead formation using only the welding current signal measured in real-time, without additional equipment such as a vision camera, during butt GMAW.
Recently, deep learning techniques have demonstrated impressive performance in many scientific applications through learning based on vast amounts of collected data16-18), and some researchers have proposed methods that detect correlations or critical relationships in the frequency domain to determine whether a measured signal is normal or faulty, using the characteristics of the signal in the frequency domain19,20). Chu suggested that the time-frequency method is effective for detecting welding defects and checking welding quality in the short-circuit transfer mode GMAW process21).
This study proposes a new Spectrogram-CNN-based algorithm for real-time detection of back-bead formation. In the proposed method, a time-frequency feature image (spectrogram) was generated from the difference in the frequency amplitude of the welding current signal with and without back-bead formation, and trained by a convolution neural network (CNN). To accelerate training, the proposed model used non-saturating neurons and a highly efficient GPU implementation of the convolution operations. To evaluate the trained model, its performance was verified using new welding data that had not been included in the training data. As a result, the proposed algorithm demonstrated excellent detection performance for back-bead formation.

2. Experimental Procedures

The welding material used in this study was GA 590 galvanized steel of 2.3 mm thickness. Table 1 shows the chemical composition and mechanical properties of the specimen material. As shown in Fig. 1, the welding test sheets were prepared to a size of 150 mm × 150 mm, and the welding experiment was performed as a butt joint with a working angle of 90° and a torch angle of 0°.
Table 1
Chemical composition and mechanical properties of the base metal
GA 590
Chemical compositions (wt%): C 0.0825, Mn 1.440, Si 0.132, S 0.002, P 0.011, Fe Bal.
Mechanical properties: YS 583 MPa, TS 628 MPa, EI 25 %
Fig. 1
Welding test sheets
The welding system used GMAW equipment (Fronius TPS-4000, Austria) in constant-voltage short-circuit transfer mode, and welding was carried out by a welding stage moving along the x and y axes. The welding current and voltage in GMAW were automatically controlled according to the wire feed rate by the synergy mode program. Fig. 2 shows the configuration of the welding system.
Fig. 2
Welding signal measurement system
The welding conditions are shown in Table 2. To protect the weld bead from oxidation, a mixed gas of Ar-90% and CO2-10% was used as the shielding gas, and AWS A5.18 ER70S-3 standard wire of 1.2 mm diameter was used as the welding wire. The contact tip to workpiece distance (CTWD) was fixed at 15 mm. The total welding length was 140 mm, and each welding experiment lasted about 14 seconds. The wire feed rate (WFR) was set to 4 m/min, and the experiment was repeated three times each for the cases with and without a root gap. The welding signal data from two experiments were used as training data, and the data from the remaining experiment were used as validation data. During the welding experiments, the welding current and arc voltage signals were measured in real-time with a data acquisition device (NI DAQ 9229) at a sampling rate of 10 kHz.
Table 2
Welding conditions
Welding parameters Parameter values
Welding speed 600 mm/min
CTWD 15 mm
Wire feed rate 4 m/min
Welding joint Butt joint
Welding wire A5.18 ER70S-3 standard wire
Shielding gas Ar-90% + CO2-10%
Root gap 0, 0.5 mm
Welding mode Short-circuiting GMAW

3. Results and Analysis

3.1 Welding Current Signal Analysis Using STFT

In this study, time-frequency analysis was performed using the short-time Fourier transform (STFT), applied to the welding signal in 0.1-second frames with 90% overlap. Time-frequency analysis describes how the frequency content of a signal changes over time. Since the welding current signal is a nonstationary time series, the STFT analyzes it by introducing local frequency information through a window of constant size. The STFT expresses the time-frequency content of a signal and can be written mathematically as shown in Equation (1).
(1)
D_STFT(t, n) = ∫ s(t′) · x(t′ − t) · e^(−j2πnt′) dt′
where D_STFT(t, n) denotes the time-frequency spectrum of the input welding current signal s(t′), and t and n denote sampling in the time domain and the frequency domain, respectively. The STFT operates by sliding the window function x(t′ − t) along the time axis, with t and n acting as translation and modulation parameters, respectively. In this study, the Hanning window, which suppresses discontinuities at the frame boundaries and combines large side-lobe attenuation with a relatively narrow main lobe, was used as the window function.
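The STFT pre-processing described above can be sketched with SciPy. The synthetic current trace below (DC level plus a 53 Hz component and noise) is an illustrative assumption, not measured data; the 10 kHz sampling rate, 0.1-second frame, 90% overlap, and Hann window follow the text.

```python
import numpy as np
from scipy.signal import stft

fs = 10_000                       # 10 kHz sampling rate, as in the experiment
t = np.arange(0, 3.0, 1 / fs)     # a 3-second segment, like the selected region
# Illustrative stand-in for the measured welding current: a ~53 Hz
# short-circuit component on a DC level, plus noise
current = 150 + 10 * np.sin(2 * np.pi * 53 * t) + np.random.randn(t.size)

nperseg = 1000                    # 0.1-second frames at 10 kHz
noverlap = 900                    # 90 % overlap between successive frames
f, frame_times, Zxx = stft(current, fs=fs, window="hann",
                           nperseg=nperseg, noverlap=noverlap)

spectrogram = np.abs(Zxx)         # magnitude -> the spectrogram image
print(spectrogram.shape)          # (frequency bins, time frames)
```

With a 1000-sample frame, the frequency axis has 501 bins at a 10 Hz resolution, which is sufficient to separate the 53 Hz and 57 Hz bands discussed below.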
As shown in Fig. 3, at a wire feed rate of 4 m/min, a back-bead was uniformly formed in the experiment with a root gap, whereas no back-bead was formed in the experiment without a root gap. The analysis was performed in the frequency and time-frequency domains on the current signal from the 2-5 second section, excluding the unstable 1-second sections at the beginning and end of the weld. The welding current signal was cut into short frames at 0.1-second intervals with 90% overlap between successive frames, and a fast Fourier transform (FFT) was applied to each frame. The output of the successive STFT frames was expressed in the time-frequency domain, and the current signals with and without back-bead formation were compared and analyzed.
Fig. 3
Weld bead surface at WFR: 4 m/min, welding current: 156 A, voltage: 18.3 V, weld gap: 0 mm and 0.5 mm under constant welding conditions. (a), (c) front view. (b), (d) back side view
Fig. 4 shows the result of analyzing the frequency components of the welding current signal in the selected section. The main frequency component of the current signal without back-bead formation was observed in the 57 Hz band, while that of the current signal with back-bead formation was observed in the 53 Hz band, somewhat lower than in the case without back-bead formation. In the spectrum of the section with back-bead formation, the frequency peaks formed a harmonic pattern.
Fig. 4
FFT result of welding current signal: (a) WFR: 4 m/min, root gap: 0 mm, (b) WFR: 4 m/min, root gap: 0.5 mm
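The peak-picking behind Fig. 4 can be illustrated with a plain FFT. The signal here is a synthetic stand-in (DC level, 53 Hz fundamental with one harmonic, noise) for the back-bead case, not the measured current.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000
t = np.arange(0, 3.0, 1 / fs)            # 3-second analysis window (2-5 s region)
# Illustrative stand-in for a current trace with back-bead formation:
# a dominant 53 Hz component with one harmonic, plus noise
sig = (150 + 10 * np.sin(2 * np.pi * 53 * t)
       + 3 * np.sin(2 * np.pi * 106 * t)
       + rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(sig - sig.mean()))   # remove DC before the FFT
freqs = np.fft.rfftfreq(sig.size, d=1 / fs)

main = freqs[spectrum.argmax()]                    # dominant frequency component
print(f"main frequency: {main:.1f} Hz")            # ~53 Hz in this example
```

The same argmax over the spectrum of a no-back-bead trace would land near 57 Hz, reproducing the band shift reported above.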
The x-axis and y-axis of the spectrogram represent the time and frequency bands, respectively, and the z-axis represents the magnitude of each frequency at a specific time in color. Based on the information observed in the frequency analysis of the welding signal, a corresponding difference was identified in the spectrogram according to back-bead formation. In Fig. 5 (a) and (b), the current signal of the selected 3-second interval, acquired at a sampling rate of 10 kHz, was converted into a spectrogram image over the 0-5000 Hz frequency band. For more detailed analysis, Fig. 5 (c) and (e) show enlarged spectrograms over the 0-1000 Hz band of the 0.1-second interval used as actual training data, and Fig. 5 (d) and (f) show the corresponding 3D surfaces of Fig. 5 (c) and (e), respectively. In contrast to the spectrogram with back-bead formation, whose magnitude lines were clearly separated owing to the harmonic frequency components of the current signal, the magnitude lines of the spectrogram without back-bead formation were not clearly separated and were evenly distributed over the 0-200 Hz band.
Fig. 5
The time-frequency analysis of the welding current signal under different welding conditions: (a), (b) Spectrograms of selected regions for data 1 and data 2, (c) Without back-bead spectrogram, (d) 3D surface of (c), (e) With back-bead spectrogram, (f) 3D surface of (e)
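The paper does not describe how the spectrogram magnitudes are mapped to the pixel images fed to the CNN; a common approach, sketched here on illustrative data, is to log-scale the magnitudes and min-max normalize them to 8-bit pixel values.

```python
import numpy as np

# Illustrative spectrogram magnitudes (frequency bins x time frames);
# in practice these would come from the STFT of the current signal
rng = np.random.default_rng(1)
spec = rng.random((101, 128)) * 1e3 + 1e-3

# Log-scale and min-max normalize to 8-bit pixel values, a common way
# of turning spectrogram magnitudes into a CNN input image
db = 20 * np.log10(spec)
img = ((db - db.min()) / (db.max() - db.min()) * 255).astype(np.uint8)
print(img.shape, img.dtype, img.min(), img.max())
```

Log-scaling compresses the large dynamic range of the current spectrum so that the 0-200 Hz structure discussed above is not drowned out by the dominant low-frequency bins.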

3.2 CNN Theory and Performance Evaluation

3.2.1 Training process

Fig. 6 shows the flowchart of the back-bead detection algorithm presented in this study. It shows a supervised learning-based CNN structure in which each training datum is labeled in the input layer and training is performed. The weights of the convolution layer kernels are trained by the back-propagation algorithm throughout the training process. The proposed CNN model consisted of an input layer, three convolution layers, two subsampling layers, a fully connected layer, and an output layer, and training was performed in a Keras22)-based environment. Rectified linear units (ReLU) were selected as the activation function and placed after every convolution layer. Dropout23) was applied after the max-pooling layer and the fully connected layer, respectively, to prevent overfitting, with the dropout rate set to 0.5. The error function was minimized using the Adam optimizer24), which updates the weights using exponential moving averages of the gradient and the squared gradient.
Fig. 6
An overview of proposed back-bead detection method using CNN structure
In general, the convolution operation in a convolutional layer is performed with an N×N kernel. In the CNN model proposed in this study, feature maps were generated by kernel convolution operations in the convolution layers, using 128 × 128 pixel images (with 3 color planes) as input; the number of feature maps therefore equals the number of kernels. In the convolution layers, the kernel size was set to 3×3 with a stride of 1, and in the max-pooling layers the kernel size was set to 2×2 with a stride of 1 for subsampling. In the last fully connected layer, the features extracted from the preceding convolution layer were flattened into a one-dimensional vector (64×30×30 = 57,600 parameters), and the probability of the class to which the input belonged was expressed by the softmax function. Finally, training was performed with the output layer set to output 1 when a back-bead was generated and 0 when it was not.
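The stated layer sizes can be checked arithmetically. Note that reproducing the 64 × 30 × 30 = 57,600 flattened vector from a 128 × 128 input requires some assumptions not stated in the text: 'valid' (unpadded) convolutions, a pooling stride of 2 (rather than the stated 1, which would not shrink the maps to 30 × 30), and a conv-conv-pool-conv-pool ordering. This sketch makes those assumptions explicit.

```python
def conv_out(size, kernel=3, stride=1):
    """Output width of a 'valid' (unpadded) convolution."""
    return (size - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Output width of a max-pooling layer (stride 2 assumed)."""
    return (size - kernel) // stride + 1

size = 128             # 128 x 128 x 3 spectrogram image input
size = conv_out(size)  # conv 3x3, stride 1 -> 126
size = conv_out(size)  # conv 3x3, stride 1 -> 124
size = pool_out(size)  # max-pool 2x2       -> 62
size = conv_out(size)  # conv 3x3, stride 1 -> 60
size = pool_out(size)  # max-pool 2x2       -> 30

flattened = 64 * size * size   # 64 feature maps in the last conv layer
print(size, flattened)         # 30 57600
```

Under these assumptions, the flattened vector matches the 57,600 parameters quoted for the fully connected layer.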
According to the experimental plan in Section 2, the data acquired through the welding experiments were composed into a total of 4760 training images by overlapping the 0.1-second frames by 90%, excluding 1 second at the start and end of each weld. With the initial learning rate set to 0.001 and the batch size set to 16, a total of 200 epochs were performed; as shown in Fig. 7, the accuracy converged to 1 and the loss to 0. No overfitting, in which the validation loss increases during training, was observed in Fig. 7. Of the total training dataset, 952 image data (20%) were randomly extracted and used as validation data, and the validation results of the training process are shown in Fig. 8 and Table 3. For the data without back-bead formation, 939 of the 952 validation data were correctly predicted and 13 were misclassified, a validation accuracy of 98.6%. For the data with back-bead formation, 946 of the 952 validation data were correctly predicted and six were misclassified, an accuracy of 99.3%.
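The training-set size can be checked arithmetically: 0.1-second frames stepped at 0.01 s (90% overlap) over the 12-second stable region of each weld, with two training runs assumed for each of the two gap conditions per Section 2, give a frame count close to the 4760 images reported.

```python
frame_len = 0.1                          # seconds per training frame
hop = frame_len * (1 - 0.9)              # 0.01 s step at 90 % overlap

stable_region = 14 - 2                   # 14 s weld minus 1 s at each end
frames_per_run = round((stable_region - frame_len) / hop) + 1

# assumed: two training runs each for the with-gap and without-gap conditions
total = frames_per_run * 2 * 2
print(frames_per_run, total)             # 1191 4764
```

The result (4764) is within a few frames of the stated 4760, consistent with a small number of discarded boundary frames.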
Fig. 7
The recognition of training accuracy, validation accuracy, training loss, validation loss of proposed CNN structure
Fig. 8
Confusion matrix diagram of proposed CNN structure

3.2.2 Testing process

As shown in Fig. 9, to verify the detection performance of the CNN model, welding current signals not used in the training data were applied to the trained model. After excluding the unstable current data (1 second each at the beginning and end) from the 14 seconds of measured data, the remaining 12 seconds of welding current data were divided into 120 verification datasets at 0.1-second intervals without overlapping; the back-bead detection results are shown in Fig. 10 and Table 4.
Fig. 9
Untrained weld current signal data: (a) Without back-bead generation, (b) With back-bead generation
Fig. 10
The result of proposed algorithm classification performance: (a) Without back-bead generation data accuracy, (b) With back-bead generation data accuracy
Table 3
Validation data classification result
Variable Validation samples Estimated Error Accuracy
Class 0 952 939 13 98.6 %
Class 1 952 946 6 99.3 %
Table 4
Final test data classification result
Variable Test samples Estimated Error Accuracy
Class 0 120 113 7 94.2 %
Class 1 120 115 5 95.8 %
The 240 test samples comprised 120 datasets with back-bead formation and 120 without. When the proposed algorithm was applied to this new welding data, not included in the training data, seven detection errors occurred in the 120 test samples without back-bead formation, a detection accuracy of 94.2%, and five errors occurred in the 120 test samples with back-bead formation, a detection accuracy of 95.8%. As in the validation results from the training process, more errors occurred for class 0 (without back-bead) than for class 1 (with back-bead). Therefore, the CNN-based back-bead detection model proposed in this study shows excellent detection performance.
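The accuracies in Table 4 follow directly from the reported confusion counts:

```python
# Confusion counts reported for the final test (Table 4)
results = {
    "class 0 (without back-bead)": {"samples": 120, "correct": 113},
    "class 1 (with back-bead)":    {"samples": 120, "correct": 115},
}

for name, r in results.items():
    errors = r["samples"] - r["correct"]
    acc = 100 * r["correct"] / r["samples"]
    print(f"{name}: {errors} errors, {acc:.1f} % accuracy")
```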

4. Conclusion

In this study, butt GMA welding was performed using GA 590 MPa grade galvanized steel sheet. After analyzing the acquired welding current signal in the frequency domain, the features of a time-frequency image, a spectrogram, were applied to the CNN model based on the difference in the frequency amplitude value of the welding current signal to develop an algorithm for predicting back-bead formation. As a result of this study, the following conclusions were drawn:
  • 1) In this study, spectrogram images were obtained by STFT frequency conversion of the welding current measured in the GMAW process, and the CNN was trained and validated by labeling the images with classes 0 and 1 indicating whether or not a back-bead was formed.

  • 2) The difference in the shape of the spectrogram image acquired in the time-frequency domain transform with and without back-bead formation was identified, and the input spectrogram image was visualized as a feature map formed in each layer of CNN to show the difference between the shape of the feature map with and without back-bead.

  • 3) The prediction performance of the proposed CNN model was verified: when applied to new welding data, the detection accuracy was 95.8 % for regions with back-bead formation and 94.2 % for regions without.

Acknowledgment

This work was supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204030-200100).

References

1. H. B. Cary, Modern Welding Technology, 2nd Edition, Upper Saddle River, New Jersey, USA (1988).
2. T. S. Hong, M. Ghobakhloo, and W. Khaksar, Robotic welding technology, Compr. Mater. Process. 6 (2014) 77-99.
3. C. Chen, N. Lv, and S. Chen, Data-driven welding expert system structure based on internet of things, Transactions on Intelligent Welding Manufacturing, Springer, Singapore (2018) 45-60. https://doi.org/10.1007/978-981-10-8330-3_3
4. C. Y. Wu, P. C. Tung, and C. C. Fuh, Development of an automatic arc welding system using an adaptive sliding mode control, J. Intell. Manuf. 21(4) (2010) 355-362. https://doi.org/10.1007/s10845-008-0184-3
5. P. Kah, M. Shrestha, E. Hiltunen, and J. Martikainen, Robotic arc welding sensors and programming in industrial applications, Int. J. Mech. Mater. Eng. 10(13) (2015) 1-16. https://doi.org/10.1186/s40712-015-0042-y
6. M. Kim, S. Shin, D. Kim, and S. Rhee, A Study on the Algorithm for Determining Back Bead Generation in GMA Welding Using Deep Learning, J. Weld. Join. 36(2) (2018) 74-81. https://doi.org/10.5781/JWJ.2018.36.2.11
7. N. Jeon, S. Rhee, and D. Kam, Parametric Study of Self-Piercing Riveting for CFRP-Aluminum Dissimilar Joint, J. Weld. Join. 36(3) (2018) 8-17. https://doi.org/10.5781/JWJ.2018.36.3.2
8. K. Eguchi, S. Yamane, H. Sugi, T. Kubota, and K. Oshima, Back bead control of the one-side robotic welding with visual sensor-cooperative control of current-waveform and torch motion for change of gap and welding position, Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society, Aachen, Germany (1998) 1182-1185. https://doi.org/10.1109/IECON.1998.724267
9. J. I. Lee, S. H. Rhee, and G. W. Uhm, A study on development of system for prediction of process parameters by using multiple regression analysis in back-bead of gas metal arc welding, Proceedings of Welding and Joining Conference (1999) 203-206.
10. J. W. Jeong, I. S. Kim, C. E. Park, H. H. Kim, J. H. Seo, and I. J. Kim, A study on the prediction of the width of the back bead using a neural network, Proceedings of Korean Society of Production and Manufacturing Conference, Mokpo National University (2008) 371-375.
11. J. I. Lee and B. K. Koh, Back-bead prediction and weldability estimation using an artificial neural network, Trans. Korean Soc. Mach. Tool. Eng. 16 (2007) 79-86.
12. J. Kim, I. S. Kim, H. H. Na, and J. H. Lee, An experimental study on prediction of back-bead geometry in pipeline using the GMA welding process, J. Korean Soc. Manuf. Technol. Eng. 20 (2011) 74-80.
13. D. S. Nagesh and G. L. Datta, Prediction of weld bead geometry and penetration in shielded metal-arc welding using artificial neural networks, J. Mater. Process. Technol. 123(2) (2002) 303-312. https://doi.org/10.1016/S0924-0136(02)00101-2
14. D. W. Cho, S. J. Na, M. H. Cho, and J. S. Lee, A study on V-groove GMAW for various welding positions, J. Mater. Process. Technol. 213(9) (2013) 1640-1652. https://doi.org/10.1016/j.jmatprotec.2013.02.015
15. J. E. Pinto-Lopera, S. T. Motta, and S. Absi Alfaro, Real-time measurement of width and height of weld beads in GMAW processes, Sensors 16(9) (2016) 1500. https://doi.org/10.3390/s16091500
16. Y. Lecun, Y. Bengio, and G. Hinton, Deep learning, Nature 521(7553) (2015) 436-444.
17. A. Krizhevsky, I. Sutskever, and G. Hinton, ImageNet classification with deep convolutional neural networks, Proceedings of the International Conference on Neural Information Processing Systems (2012) 1097-1105.
18. K. Simonyan and A. Zisserman, Very deep convolutional networks for large-scale image recognition, Proceedings of the International Conference on Learning Representations (2015) 1-14.
19. F. J. Meyer, J. B. Nicoll, and A. P. Doulgeris, Correction and characterization of radio frequency interference signatures in L-band synthetic aperture radar data, IEEE Trans. Geosci. Remote Sens. 51(10) (2013) 4961-4972. https://doi.org/10.1109/TGRS.2013.2252469
20. F. Zhou, M. Xing, X. Bai, G. Sun, and Z. Bao, Narrow-band interference suppression for SAR based on complex empirical mode decomposition, IEEE Geosci. Remote Sens. Lett. 6(3) (2009) 423-427. https://doi.org/10.1109/LGRS.2009.2015340
21. Y. X. Chu, S. J. Hu, W. K. Hou, P. C. Wang, and S. P. Marin, Signature analysis for quality monitoring in short-circuit GMAW, Weld. J. 83(12) (2004) 336-343.
22. F. Chollet, Keras: Deep learning library for Theano and TensorFlow (2015). https://github.com/fchollet/keras
23. N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, J. Mach. Learn. Res. 15(1) (2014) 1929-1958.
24. D. P. Kingma and J. Ba, Adam: A method for stochastic optimization (2014). arXiv:1412.6980 [Online]. Available: http://arxiv.org/abs/1412.6980

