Fig. 1. DNN-assisted system model for polar code decoding
Fig. 2. Factor graph of the (8,4) polar code BP decoding method
Fig. 3. Information transfer process of a processing element (PE) of the (8,4) polar code
Fig. 4. Dense Tanner graph (a) and sparse Tanner graph (b) of the (8,4) polar code
Fig. 5. Sparse neural network decoding structure of the (8,4) polar code
Fig. 6. Structure of the DNN-NOMS neural network decoder
Fig. 7. Decoding performance of the neural network model with different numbers of network layers
Fig. 8. Evolution of the loss function and training parameters
Fig. 9. Performance comparison of different BP decoding methods
Fig. 10. Performance comparison of decoding methods under different code rates
Fig. 11. Performance comparison of decoding methods under different turbulence intensities
| Symbol | Meaning |
| --- | --- |
| $ {r_{ji}} $ | Information passed from check node $j$ to variable node $i$ |
| $ {q_{ij}} $ | Information passed from variable node $i$ to check node $j$ |
| $ C(i) $ | Set of check nodes adjacent to variable node $i$ |
| $ V(j)\backslash i $ | Set of variable nodes adjacent to check node $j$, with variable node $i$ removed |
| $ V(j) $ | Set of variable nodes adjacent to check node $j$ |
| $ i' $, $ j' $ | Variable node $i'$ and check node $j'$, denoting the updated values after the next iteration |
| $ {u_i}(0) $ | A posteriori probability of receiving $y_i$ given codeword bit $x_i = 0$ |
| $ {u_i}(1) $ | A posteriori probability of receiving $y_i$ given codeword bit $x_i = 1$ |
| $ L(x) $ | Log-likelihood ratio: the logarithm of the ratio of the probability that the node is 0 to the probability that it is 1 |
| $ l $ | The $l$-th hidden layer |

Table 1. Symbol definitions for the NOMS decoding method
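The symbols in Table 1 appear in the NOMS check-node update, which combines the normalization factor and the offset factor of Table 3. A minimal sketch, assuming the standard normalized-offset min-sum form; the function name is illustrative and the default $\alpha$, $\beta$ are the trained values reported for turbulence intensity 0.09:

```python
import numpy as np

def noms_check_update(q, alpha=0.44, beta=0.89):
    """Normalized-offset min-sum check-node update (sketch).

    q: LLRs q_{i'j} from the variable nodes adjacent to check node j,
       excluding the target variable node i (i.e. the set V(j)\\i).
    alpha, beta: normalization and offset factors (Table 3 values assumed).
    Returns r_{ji}, the message passed back to variable node i.
    """
    sign = np.prod(np.sign(q))                      # product of signs over V(j)\i
    magnitude = max(np.min(np.abs(q)) - beta, 0.0)  # offset applied to min magnitude
    return alpha * sign * magnitude                 # normalized outgoing message

# Example with three incoming LLRs
r = noms_check_update(np.array([1.5, -2.0, 3.1]))
```

The offset is clipped at zero so the update never flips the sign of the message, and the single multiplication by $\alpha$ matches the one extra multiply per edge counted for NOMS-style decoders in Table 4.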
| Parameter | Value |
| --- | --- |
| Length of polar code | 1024/4096 |
| Code rate | 0.25/0.5/0.75 |
| Turbulence intensity | 0.09/1.193/49.725 |
| Modulation | PPM |

Table 2. Simulation parameters
| Turbulence intensity | Normalization factor | Offset factor | Loss1 | Loss2 |
| --- | --- | --- | --- | --- |
| 0.09 | 0.44 | 0.89 | 0.4111 | 0.2624 |
| 1.193 | 0.39 | 0.85 | 0.4723 | 0.2839 |
| 49.7215 | 0.48 | 0.92 | 0.5426 | 0.3108 |

Table 3. Optimal factor parameters under different turbulence intensities
| Operation type | MS ($T = 40$) | OMS ($T = 40$) | DNN-NOMS ($T = 5$) |
| --- | --- | --- | --- |
| Addition/Subtraction | $ {T_{{\text{MS}}}}\left( {2 N{{\log }_2}N} \right) $ = 819200 | $ {T_{{\text{OMS}}}}\left( {2 N{{\log }_2}N{\text{ + }}1} \right) $ = 819240 | $ {T_{{\text{NOMS}}}}\left( {2 N{{\log }_2}N{\text{ + }}1} \right) $ = 102405 |
| Multiplication/Division | $ {T_{{\text{MS}}}}\left( {2 N{{\log }_2}N} \right) $ = 819200 | $ {T_{{\text{OMS}}}}\left( {2 N{{\log }_2}N} \right) $ = 819200 | $ {T_{{\text{NOMS}}}}\left( {2 N{{\log }_2}N{\text{ + }}1} \right) $ = 102405 |
| Compare | $ {T_{{\text{MS}}}}\left( {2 N{{\log }_2}N} \right) $ = 819200 | $ {T_{{\text{OMS}}}}\left( {2 N{{\log }_2}N} \right) $ = 819200 | $ {T_{{\text{NOMS}}}}\left( {2 N{{\log }_2}N} \right) $ = 102400 |
| Storage space | - | - | $ 4 N{\log _2}N $ |

Table 4. Comparison of decoding complexity of different modified BP decoding methods (counts evaluated for $N = 1024$)
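The counts in Table 4 follow directly from the per-iteration cost $2N\log_2 N$ with $N = 1024$ and the iteration counts $T$ given per method (40 for MS and OMS, 5 for DNN-NOMS). A quick arithmetic check; the function name and dictionary keys are illustrative:

```python
from math import log2

def op_counts(N, T_ms=40, T_oms=40, T_noms=5):
    """Reproduce the operation counts of Table 4 for code length N."""
    base = 2 * N * log2(N)          # 2N*log2(N) operations per iteration
    return {
        "MS additions":       T_ms   * base,        # T_MS * 2N*log2(N)
        "OMS additions":      T_oms  * (base + 1),  # +1 from the offset term
        "DNN-NOMS additions": T_noms * (base + 1),
        "DNN-NOMS compares":  T_noms * base,
        "DNN-NOMS storage":   4 * N * log2(N),      # stored factor parameters
    }

counts = op_counts(1024)
```

With only 5 iterations, DNN-NOMS trades roughly an eight-fold reduction in operations for the extra $4N\log_2 N$ storage of its trained factors.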