
Parallel implementation of real-time delta-sigma modulation for digital mobile fronthaul

Open Access

Abstract

We propose and experimentally demonstrate a novel FPGA-based parallel architecture for delta-sigma modulation (DSM) for digital mobile fronthaul employing the DSM interface. This architecture breaks the limitation imposed by the feedback loop in DSM, is not constrained by critical paths, and supports fully parallel processing, so it can handle high sampling rates at low hardware operating speeds. In contrast to other parallel schemes, the proposed bit-by-bit quantization DSM avoids the significant storage resources required for buffering. A real-time experimental system using a Xilinx Kintex UltraScale FPGA was implemented to validate the feasibility of the proposed architecture. Fourteen carrier aggregated orthogonal frequency division multiplexing (OFDM) signals are digitized by DSM into a 5 Gb/s PAM4 signal and transmitted over a 20 km single-mode fiber (SMF). As a waveform-agnostic digitization interface, we also experimentally demonstrate the DSM with 14 carrier aggregated filter-bank multicarrier (FBMC) signals, which achieve better EVM performance.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

With the significant growth of mobile data, the cloud radio access network (C-RAN) has been proposed as a more flexible, higher-capacity wireless access network architecture [1,2], which includes the mobile backhaul (MBH) from the evolved packet core (EPC) to the baseband processing units (BBUs) and the mobile fronthaul (MFH) from the BBUs to the remote radio heads (RRHs) [3,4]. According to the transmission format of the signals, MFH can be divided into analog MFH and digital MFH [5,6]. The analog MFH system, which transmits mobile signals as analog waveforms, has the advantages of low system cost and high spectral efficiency, but it is more susceptible to noise and transmission impairments in analog circuits [7,8]. On the other hand, digital MFH using the common public radio interface (CPRI) to sample and quantize signals has better tolerance to noise and transmission impairments [9]. However, it has lower spectral efficiency and cannot support multiplexing, large-scale MIMO, and large-scale carrier aggregation (CA) [10,11]. To overcome the data rate bottleneck of CPRI, the enhanced CPRI (eCPRI) was developed to redistribute the functions of the RAN, including high-layer and low-layer splits, which reduces the required fronthaul speed [12]. Nonetheless, relocating the low-PHY from the BBU to the RRH substantially increases the complexity and cost of the RRH, posing challenges to the adoption of centralized MFH. Integrating the advantages of both analog and digital MFH, DSM has been proposed as an alternative to CPRI that can provide up to four times the capacity of CPRI [10,13,14]. The principle of the delta-sigma modulator is to oversample the digital signal at a rate much higher than the Nyquist rate and then use noise shaping to push the quantization noise within the signal bandwidth out of band. At the receiver, a filter removes the out-of-band noise, thereby reducing the in-band noise and achieving a high signal-to-quantization-noise ratio (SQNR) with a low quantization bit width [15,16].
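To make this principle concrete, the following is a minimal serial sketch of a second-order low-pass DSM in Python (an illustrative error-feedback form with a 1-bit quantizer; the tone frequency, amplitude, and signal length are placeholders, not parameters of the reported system):

```python
import numpy as np

def serial_dsm(x):
    """Minimal serial second-order error-feedback DSM with a 1-bit quantizer.

    Implements Y(z) = X(z) + (1 - z^-1)^2 * E(z): the quantization noise E is
    shaped towards high frequencies, where a receiver-side filter can remove it.
    """
    y = np.empty(len(x))
    e1 = e2 = 0.0                     # quantization errors of the previous two samples
    for i, xi in enumerate(x):
        s = xi - 2.0 * e1 + e2        # value seen by the quantizer (feedback loop)
        y[i] = 1.0 if s >= 0.0 else -1.0
        e2, e1 = e1, y[i] - s         # shift the error history
    return y

# Toy usage: an 8x-oversampled tone, amplitude kept small to avoid quantizer overload.
fs, osr = 625e6, 8
t = np.arange(8192) / (fs * osr)
bits = serial_dsm(0.4 * np.sin(2 * np.pi * 50e6 * t))
```

The per-sample dependence of `s` on the previous two errors is exactly the feedback loop that makes parallel hardware implementation difficult, as discussed next.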

However, the actual operating frequency of DSM is typically restricted by the hardware implementation, because the feedback loops in the structure are difficult to process in parallel. To address this challenge, a time-interleaved delta-sigma modulator (TI-DSM) was proposed in Refs. [17-19] using multi-rate theory. The converted DSM can process multiple input samples and produce multiple output samples within one clock cycle. In Refs. [20,21], a bit-separation architecture is proposed, which separates the input signal into high-order and low-order bits, with all possible high-order bit scenarios stored in a dedicated memory module. These methods improve the sampling rate of DSM, but they do not overcome the critical-path limitation of the DSM feedback loop. References [22-24] propose a truncated architecture using a multi-core DSM to convert parallel signals into serial signals, where the input signal is divided into multiple blocks and each block is processed by a DSM core. However, the discontinuity between blocks degrades DSM performance, and this architecture requires a significant amount of register resources to buffer and serialize the data. Reference [25] proposes a bit-reduction structure that enumerates the possible output values of a MASH 1-1 low-pass DSM to infer the outputs at subsequent moments. However, this method is only applicable to quantizing 2-bit data into 1-bit data. In practical DSM systems, the input analog signals typically require multiple bits for representation, so further research is needed on DSM structures that can quantize multi-bit inputs while consuming fewer register resources.

In this paper, we propose a bit-by-bit quantization DSM structure. By quantizing the data one bit at a time, we achieve N-bit data quantization with a 2-bit output. The proposed structure is not subject to the constraints of the feedback loop's critical path and does not compromise the noise shaping capability of the DSM. We demonstrate a 20-km SMF transmission system with a real-time DSM transmitter. OFDM signals aggregated from 14 carriers are digitized by the DSM and transmitted over an intensity modulation-direct detection (IM-DD) link. Experimental results show that the EVM of the aggregated signal after transmission is less than 3.6%, in which the EVM of the first nine component carriers (CCs) is less than 2% and the EVM of the last five CCs is less than 8%. These results support the transmission of 256QAM and 64QAM, respectively. As a waveform-agnostic digital interface, we further validate the 5G compatibility of DSM by employing FBMC signals.

2. Hardware design

The DSM continuously filters the noise generated by the quantizer through the feedback loop and reshapes the spectrum of the quantization noise. However, the feedback loop limits the clock rate of the DSM and makes it difficult to process in parallel. Overcoming the critical path within the feedback loop has always been a major hurdle in parallel DSM design. This section presents a fully parallel algorithm suitable for quantizing N bits to M bits (M < N).

2.1 Bit-reduction DSM

According to the structure of the DSM, the output Y can be expressed as:

$$Y(z )= X(z )+ H(z )\cdot E(z )$$
where X is the input signal and E is the difference between the signal before and after quantization, i.e., the quantization noise. Assuming H is a second-order finite impulse response (FIR) filter, it is easy to see from the expression for Y that the output is determined by $X(z)$, $E(z)$, $E({z - 1})$, and $E({z - 2})$: the quantization noise E generated at the $i$-th moment only affects the outputs Y at the $({i + 1})$-th and $({i + 2})$-th moments. In parallel processing with a parallelism degree of p, the p parallel channels cannot be processed simultaneously at the same moment, because each output Y requires the quantization noise E of the previous two samples; this implies a certain processing delay in quantizing and outputting the p parallel channels. When the next clock refreshes the p input samples, quantizing the first and second channels requires the quantization noise E of the $({p - 1})$-th and $p$-th channels of the previous clock, which has not yet been computed within one clock cycle. The traditional pipelining approach to parallel processing therefore fails in this case. Ref. [25] introduces a proactive bit-reduction structure: assuming that the possible values of X and E both form a small finite set that can be easily enumerated, the possible values of the signal before quantization, S, can also be enumerated by traversing all cases, where S can be expressed as:
$$ S(z)=X(z)-2 E(z-1)+E(z-2). $$

The possible values of S for the first and second channels at the current moment, which are affected by the quantization noise $E({z - 1})$ and $E({z - 2})$ of the $({p - 1})$-th and $p$-th channels at the previous moment, are enumerated. Assuming there are L cases resulting from permuting and combining these possible values of S, the quantized results of the p channels of data under these L cases are computed in parallel. Once the quantization noise E of the previous moment's $({p - 1})$-th and $p$-th channels has been computed, its influence on the first and second channels of the current moment is determined, so the correct one of the L parallel cases can be selected. The quantization noise of the $({p - 1})$-th and $p$-th channels in this correct case in turn selects the correct result among the L parallel cases at the next clock. By iterating this process, pipelined computation is achieved. However, since all of these cases must be computed exhaustively and FPGA resources are limited, L cannot be very large. In Ref. [25], the input X is a 2-bit number and the quantized output Y is a 1-bit number; the values of S fall into 5 categories, giving a total of 25 possible combinations ($L = 5 \times 5 = 25$).
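The speculate-then-select idea can be summarized in a short behavioural sketch. The code below is only a schematic of the mechanism (not the FPGA implementation of Ref. [25]): each block of p samples is evaluated for every candidate pair of boundary errors, and one precomputed result is selected once the previous block's last two errors are known. It assumes the true errors always lie in the enumerated candidate set, and that 0 is in that set for the initial state; `quantize` is a placeholder for the per-sample quantizer.

```python
from itertools import product

def speculative_block_dsm(x, p, quantize, error_candidates):
    """Behavioural sketch of speculative block-parallel DSM processing.

    x                : input samples (length a multiple of p)
    p                : parallelism degree (block size)
    quantize         : maps the pre-quantizer value s to the output y
    error_candidates : enumerated possible values of the boundary errors E
    """
    out, prev = [], (0, 0)                       # true errors E(i-1), E(i-2) of the previous block
    for b in range(0, len(x), p):
        block, results = x[b:b + p], {}
        for e1, e2 in product(error_candidates, repeat=2):   # speculate over boundary errors
            es, ys = [e2, e1], []                # es[-1] = E(i-1), es[-2] = E(i-2)
            for xi in block:
                s = xi - 2 * es[-1] + es[-2]     # Eq. (2)
                y = quantize(s)
                ys.append(y)
                es.append(y - s)
            results[(e1, e2)] = (ys, (es[-1], es[-2]))
        ys, prev = results[prev]                 # select using the previous block's true errors
        out.extend(ys)
    return out
```

In hardware the L speculative branches run concurrently and the selection reduces to a multiplexer, so the feedback never has to close within a single high-rate sample period.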

2.2 Bit-by-bit quantization DSM

The bit-reduction technique only achieves quantization from 2 bits to 1 bit. In practical systems, however, the transmitted signals are usually represented by multiple bits. If $N$-bit data is quantized to $M$ bits, there can be as many as $2^{N - M}$ possible values of S, and $2^{N - M + 1}$ cases need to be processed in parallel, which poses a significant challenge to FPGA resources. The bit-reduction method is therefore no longer applicable. To reduce the use of FPGA registers and relax the latency constraints, we propose a novel bit-by-bit quantization DSM structure, which enables a DSM with multi-bit input. The basic idea is to quantize the multi-bit data one bit at a time, so that the number of parallel cases required for each quantization step remains acceptable. The second-order DSM is shown in Fig. 1(a), and the hardware implementation block diagram is shown in Fig. 1(b).

Fig. 1. (a) The structure of the second-order DSM (b) The hardware implementation block diagram of DSM.

The DSM system consists of two modules: oversampling and noise shaping. In this paper, a polyphase interpolating filter with 8 phases is used to achieve 8x oversampling. Each phase filter in the polyphase interpolator is implemented using a 2-way parallel fast FIR algorithm (FFA) [26], resulting in a final implementation of a 16-way parallel FIR. In the noise shaping module, we propose a bit-by-bit quantization structure based on the aforementioned bit-reduction scheme. The basic idea is to perform repeated least significant bit (LSB) quantization on multi-bit numbers, with each quantization step quantizing one bit at a time. The LSB quantization module is illustrated in Fig. 2.
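As an illustration of the oversampling stage, the sketch below builds an 8-phase polyphase interpolator in Python; the filter length and cutoff are illustrative, not the coefficients of the FPGA design. Each phase filter sees only the low-rate input, which is what allows the phases (and their FFA decompositions) to run in parallel in hardware.

```python
import numpy as np
from scipy.signal import firwin

def polyphase_interpolate(x, factor=8, taps=128):
    """8x oversampling with a polyphase FIR interpolator (behavioural sketch)."""
    h = factor * firwin(taps, 1.0 / factor)      # anti-imaging low-pass prototype
    phases = h.reshape(-1, factor).T             # phases[k] = h[k::factor]
    y = np.zeros(len(x) * factor)
    for k in range(factor):                      # each phase filter runs on the low-rate input
        y[k::factor] = np.convolve(x, phases[k])[:len(x)]
    return y
```

In the reported design, each of the 8 phase filters is additionally split by a 2-way FFA, giving the 16 parallel output channels per clock mentioned above.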

Fig. 2. Signal processing flow of the LSB quantization.

The LSB quantization module comprises input data reorganization, quantization, multiplexer, and adder sub-modules. In the data reorganization sub-module, assuming the input data consists of N bits, the LSBs of the 16 channels at the $i$-th moment and of the first two channels at the $({i + 1})$-th moment are concatenated to form the input of the quantization sub-module. The remaining $N - 1$ high-order bits are stored in a buffer for further processing.

In the quantization module, as shown in Fig. 3, the input consists only of the LSB data, i.e., 1 bit per sample. After quantization, all of these bits are set to 0, indicating the completion of 1-bit quantization. Unlike the bit-reduction structure, there is no need to traverse all possible scenarios of S in this process; we only need to check whether the LSB of a sample, denoted Q, is 0 or 1. If Q is 1, the quantization noise of that sample (blue square in Fig. 3) is also 1: we subtract 1 to set Q to 0, and apply +2 and -1 operations to the subsequent two samples (yellow squares in Fig. 3) affected by this bit of quantization noise. If Q is 0, no operation is performed. It is worth noting that a carry signal C may occur during this process. In contrast to the conventional bit-reduction architecture, where the carry signal would also have to be quantized, here we simply retain the carry signal and pass it on to the next module. For instance, consider a sample whose value to be quantized is binary 11 (Q = 1 and C = 1). Instead of quantizing the entire value to 0 and applying +6 and -3 to the following two samples, we only quantize Q from 1 to 0 and apply +2 and -1 to the following two samples. By focusing solely on the LSB, this approach greatly enhances the algorithm's versatility, since there is no longer any need to traverse all possible cases of S. Furthermore, when parallelizing the exhaustive exploration of the possible scenarios for $Q_0^0$ and $Q_1^0$, only the possible values of the LSB, namely 0 and 1, need to be considered; this reduces the number of parallel cases to 4 and significantly decreases resource consumption. Additionally, ${Q_p}$ and ${Q_{p + 1}}$ at the $i$-th moment are used as the select signals of the four-to-one multiplexer that picks one of the four parallel cases at the $({i + 1})$-th moment. For the multiplexer output, we utilize only its carry data C, while its Q is 0. The high $(N - 1)$ bits of the original data from the buffer, combined with the carry data C from the quantization output, serve as the input of the next LSB quantization module. Moreover, in the adder, the carry data ${C_p}$ and ${C_{p + 1}}$ at the $i$-th moment must also be accounted for as part of the two preceding samples at the $({i + 1})$-th moment.
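A software reference model of the bit-by-bit quantization might look as follows. This is a serial behavioural sketch for clarity (the FPGA version evaluates the four speculative cases in parallel and pipelines the stages); the bit widths are illustrative, and the clipping and overflow handling discussed below is omitted.

```python
def bit_by_bit_quantize(x, in_bits=3, out_bits=2):
    """Behavioural sketch of bit-by-bit quantization from in_bits to out_bits.

    Each pass clears the current LSB (Q). Clearing a set LSB introduces an
    error E = -1 on that sample, which is noise-shaped by applying -2E = +2
    to the next sample and +E = -1 to the one after, as in Fig. 3. After a
    pass every LSB is zero, so the data can be shifted right and the next bit
    processed; carries into higher bits are kept (no clipping in this model).
    """
    data = list(x)                             # integer samples
    for _ in range(in_bits - out_bits):
        for i in range(len(data)):
            if data[i] & 1:                    # Q == 1
                data[i] -= 1                   # set Q to 0 (error E = -1)
                if i + 1 < len(data):
                    data[i + 1] += 2           # -2E
                if i + 2 < len(data):
                    data[i + 2] -= 1           # +E
        data = [d >> 1 for d in data]          # keep the high bits plus any carry C
    return data                                # out_bits-wide samples (e.g. PAM4 levels)
```

The key property is that only the LSB is ever inspected, so the speculative branches per stage reduce to the four combinations of the two boundary bits, regardless of the input bit width N.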

Fig. 3. The principle of quantization processing.

As shown in black in Fig. 4, the amplitude of the input signal affects DSM performance: a smaller amplitude results in a lower signal-to-noise ratio (SNR), while exceeding the appropriate input range can lead to instability such as quantizer overload. In addition, in the hardware implementation, when quantizing higher bit positions, adding the input signal and the carry signal $C$ may cause overflow beyond the original bit width. It is then no longer sufficient to consider only whether the LSB is 1 or 0. As mentioned in Ref. [23], we need to clip $S$ and extend its possible values to (0, 1, -1, 2); choosing an appropriate input signal amplitude helps limit the number of possible values of S. As shown in red and green in Fig. 4, clipping does not significantly affect the performance of the DSM, and the bit-by-bit DSM maintains performance parity with the normal DSM.
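A clipping step of this kind could be sketched as follows; the range is taken from the values (0, 1, -1, 2) quoted above, and where exactly it sits in the pipeline of the FPGA design is not detailed here.

```python
def clip_s(s, lo=-1, hi=2):
    """Clip the pre-quantizer value so that S stays within the enumerated set
    {-1, 0, 1, 2}; with a suitably chosen input amplitude the clipping error
    is small and the noise shaping is essentially unaffected (cf. Fig. 4)."""
    return max(lo, min(hi, s))
```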

Fig. 4. SNR versus amplitude of input signal.

3. Experimental setup and results

3.1 Experimental setup

Using the bit-by-bit quantization structure in Fig. 1(b), the real-time DSM is implemented on a Xilinx Kintex UltraScale KU115 FPGA, shown in Fig. 5(b). The experimental setup is shown in Fig. 5. In the BBU, 14 CCs are aggregated by digital signal processing (DSP) at a sampling rate of 625 MSa/s. Each CC has an IFFT/FFT size of 2048, with 1200 data-carrying subcarriers, a cyclic prefix (CP) length of 144, and a subcarrier spacing of 15.26 kHz. Before aggregation, each CC is upsampled by a factor of 20 and then frequency-shifted to a different center frequency, with a guard band of 228.9 kHz between CCs. Finally, the upsampled and frequency-shifted CCs are combined in the time domain to obtain the aggregated broadband signal. After aggregation, the sampling rate of the signal is 625 MSa/s, and the power spectrum of the aggregated signal is shown in Fig. 5(d). The aggregated signal, stored in a read-only memory (ROM), is cyclically transmitted in a 2-way parallel manner in the transmitter FPGA and digitized into a PAM4 signal. In the DSM, the aggregated signal, parallelized into 2 channels, undergoes 8x oversampling to 16 channels, with each channel operating at 312.5 MHz. Noise shaping is then employed to shape the spectrum of the quantization noise, pushing as much of it as possible out of the signal band. The spectrum of the PAM4 signal obtained after DSM digitization is shown in Fig. 5(e). The resulting 5-Gb/s PAM4 signal is output via the FMC DAC on the FPGA board. The PAM4 signal is amplified by an electrical amplifier (EA) and used to drive a Mach-Zehnder modulator (MZM) to generate an optical signal, where the optical carrier is emitted by an external cavity laser (ECL) operating at a wavelength of 1550 nm with an optical power of 15 dBm. At the receiver end, the optical signal transmitted over 20 km of SMF is converted into an electrical signal by a 40 Gb/s high-gain photodiode (PD). The electrical signal is then captured and processed offline using a high-bandwidth modular oscilloscope (LabMaster 10-36Zi-A) with the sampling rate set to 10 GSa/s. In the offline DSP, the captured signal is first equalized and decided to recover the PAM4 signal, which is sufficient to compensate for the inter-symbol interference caused by fiber dispersion during transmission. Low-pass filtering then removes the high-frequency quantization noise outside the signal bandwidth. Subsequently, the signal is downsampled by a factor of 8 to obtain the original aggregated signal. Finally, after de-aggregation, the EVM of each CC is calculated.
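The BBU-side aggregation DSP can be summarized by the sketch below. The center frequencies and CC waveforms are placeholders; the actual frequency grid follows the CC bandwidth plus the 228.9 kHz guard band quoted above.

```python
import numpy as np
from scipy.signal import resample_poly

def aggregate_ccs(baseband_ccs, center_freqs, fs_out=625e6, up=20):
    """Transmitter-side carrier aggregation sketch: upsample each complex
    baseband CC by 20, shift it to its assigned center frequency, take the
    real part, and sum the CCs in the time domain."""
    agg = 0.0
    for cc, fc in zip(baseband_ccs, center_freqs):
        x = resample_poly(cc, up, 1)                          # 20x upsampling to 625 MSa/s
        n = np.arange(len(x))
        agg = agg + np.real(x * np.exp(2j * np.pi * fc * n / fs_out))
    return agg
```

At the receiver, the inverse chain (low-pass filtering, 8x downsampling, and per-CC down-conversion and decimation) recovers each CC before the EVM is evaluated.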

Fig. 5. (a) Experimental setup. (b) Xilinx Kintex UltraScale KU115 FPGA on the development board with DAC. (c) The floor plan of FPGA resource utilization after implementation. (d) Power spectrum of 14 carrier aggregated OFDM signals for DSM. (e) Electrical spectrum of 14 carrier aggregated OFDM signals after DSM.

Due to the non-uniform distribution of quantization noise within the signal band after DSM shaping, the EVM values of different CCs are also different, resulting in certain frequency bands of CCs being unable to support the transmission of high-order QAM signals. In order to fully utilize the resources in different frequency bands and optimize the transmission scheme, we allocate different modulation formats with varying orders to the CCs in different frequency bands. Specifically, for frequency bands with lower quantization noise, higher-order modulation formats are allocated to the CCs, while for frequency bands with higher quantization noise, relatively lower-order modulation formats are assigned to the CCs.
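As an illustration of this allocation, the per-CC format can be selected from the measured EVM using thresholds corresponding to the 3.5% and 8% limits reported below; the exact mapping used in the experiment is not spelled out, so this sketch is only indicative.

```python
def assign_format(evm_percent):
    """Pick a modulation format for a CC from its measured EVM
    (indicative thresholds: 3.5% for 256QAM, 8% for 64QAM)."""
    if evm_percent <= 3.5:
        return "256QAM"
    if evm_percent <= 8.0:
        return "64QAM"
    return "lower-order QAM"
```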

3.2 Experimental results

Figure 6 shows the results of our experiment, where 14 CCs are used with different modulation formats assigned to different carriers. When the received optical power is higher than -10 dBm, the system noise is dominated by quantization noise and the channel noise can be ignored; the EVM therefore remains almost unchanged as the received optical power decreases. As the received optical power decreases further, the channel noise gradually increases and the system noise is determined by both the channel noise and the quantization noise; the error rate of the received PAM4 signal after equalization and decision increases, degrading the EVM performance. When the received optical power is -13 dBm, the EVM of each CC is as shown in Fig. 6(b). Among the 14 CCs, the EVM of the first 9 CCs (CC1-CC9) is below 3.5%, which supports 256QAM modulation, and the EVM of the remaining 5 CCs (CC10-CC14) is below 8%, which supports 64QAM modulation. Constellations of CCs 3, 9, 10, and 14 are shown in Fig. 6(c), corresponding to the best and worst cases of 256QAM and 64QAM, respectively.

Fig. 6. Experimental results. (a) EVM vs received optical power for OFDM and FBMC signals. (b) EVM of each CC at -13 dBm of (a). (c) Corresponding to the best and worst cases of 256QAM and 64QAM, respectively. (d) The processing delay shown by ILA.

In addition, as a waveform-agnostic digital interface, DSM can digitize not only OFDM signals but also 5G multicarrier waveforms. We experimentally validate its 5G compatibility by aggregating FBMC signals. In the experiment, the parameters of each FBMC CC are set to be consistent with those of the OFDM CCs. Each FBMC signal is filtered by a prototype filter with coefficients $H = [0, 0.972, 0.707, 0.235]$ and an overlap factor of 4. The curves in Fig. 6(a) depict the EVM versus received optical power (ROP) for optical back-to-back (OBTB) and 20-km SMF transmission. Compared to OFDM signals, FBMC signals have the advantage of lower out-of-band power leakage. At the same ROP, FBMC signals exhibit a lower EVM, as plotted in green in Fig. 6(a). The first 9 CCs (CC1-CC9) have an EVM below 2% and can support 1024QAM, CCs 10-13 have an EVM below 3.5% and can support 256QAM, and the last CC has an EVM below 8% and can support 64QAM, although this is not further experimentally demonstrated here.

The processing delay of the proposed DSM circuit is shown in Fig. 6(d), as captured by the integrated logic analyzer (ILA). The green waveform is the system clock, the pink waveform is the input 3-bit signal, and the yellow waveform is the output 2-bit signal. The processing delay of one bit of quantization is 20 clock cycles: 18 cycles for the sequential quantization of the 18 channels of input signals, 1 cycle for the selection among the parallel cases, and 1 cycle for data summation and output.

The implementation cost of the DSM is 1.35% (18004) of the FFs, 1.95% (12927) of the LUTs, and 1.55% (4570) of the memory LUTs (LUTRAM), which is lower than that of the fully parallel DSM structure used in Ref. [13], where registers are used extensively as a serial-to-parallel conversion buffer. Moreover, owing to the reduced number of parallel cases in each bit quantization, the resources required for each bit-by-bit quantization stage are also smaller than those of the bit-reduction structure proposed in Ref. [25].

4. Conclusion

In this paper, we propose and implement a bit-by-bit quantization DSM parallel structure, which is not limited by the critical path of the DSM. Compared with previous parallel DSM schemes, the bit-by-bit quantization DSM consumes fewer resources and does not compromise the noise shaping performance of the DSM. We constructed a real-time experimental system to demonstrate the feasibility of the bit-by-bit quantization DSM in digital mobile fronthaul. Fourteen carrier aggregated OFDM signals were digitized by the DSM and transmitted over 20-km SMF with EVMs of less than 3.5% or 8%, supporting high-order modulation formats up to 256QAM. As a waveform-agnostic digitization interface, we also experimentally demonstrated the DSM with 14 carrier aggregated FBMC signals, which achieve better EVM performance.

Funding

Key-Area Research and Development Program of Guangdong Province (Grant No. 2021B0101310003); National Natural Science Foundation of China (62271517); Guangdong Basic and Applied Basic Research Foundation (2023B1515020003); Local Innovation and Research Teams Project of Guangdong Pearl River Talents Program (2017BT01X121).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. K. Tanaka and A. Agata, “Next-Generation Optical Access Networks for C-RAN,” in Optical Fiber Communication Conference (OSA, 2015), p. Tu2E.1.

2. A. Checko, H. L. Christiansen, Y. Yan, L. Scolari, G. Kardaras, M. S. Berger, and L. Dittmann, “Cloud RAN for Mobile Networks—A Technology Overview,” IEEE Commun. Surv. Tutorials 17(1), 405–426 (2015). [CrossRef]  

3. T. Pfeiffer, “Next Generation Mobile Fronthaul and Midhaul Architectures [Invited],” J. Opt. Commun. Netw. 7(11), B38 (2015). [CrossRef]  

4. T. Pfeiffer, “Next Generation Mobile Fronthaul Architectures,” in Optical Fiber Communication Conference (OSA, 2015), p. M2J.7.

5. P. Vetter, “Next Generation Optical Access Technologies,” in European Conference and Exhibition on Optical Communication (OSA, 2012), p. Tu.3.G.1.

6. A. Pizzinat, P. Chanclou, F. Saliou, and T. Diallo, “Things You Should Know About Fronthaul,” J. Lightwave Technol. 33(5), 1077–1083 (2015). [CrossRef]  

7. J. Wang, C. Liu, J. Zhang, M. Zhu, M. Xu, F. Lu, L. Cheng, and G.-K. Chang, “Nonlinear Inter-Band Subcarrier Intermodulations of Multi-RAT OFDM Wireless Services in 5G Heterogeneous Mobile Fronthaul Networks,” J. Lightwave Technol. 34(17), 4089–4103 (2016). [CrossRef]

8. J. Wang, Z. Yu, K. Ying, J. Zhang, F. Lu, M. Xu, and G.-K. Chang, “Delta-Sigma Modulation for Digital Mobile Fronthaul Enabling Carrier Aggregation of 32 4G-LTE / 30 5G-FBMC Signals in a Single-λ 10-Gb/s IM-DD Channel,” in Optical Fiber Communication Conference (OSA, 2016), p. W1H.2.

9. J. Shi, J. Liu, L. Zhang, L. Zhao, W. Zhou, and J. Yu, “Digital Mobile Fronthaul based on Delta-sigma Modulation and Chirp-managed Signal Transmission,” J. Lightwave Technol. 41(20), 6521–6532 (2023). [CrossRef]  

10. J. Wang, Z. Yu, K. Ying, J. Zhang, F. Lu, M. Xu, L. Cheng, X. Ma, and G.-K. Chang, “Digital Mobile Fronthaul Based on Delta–Sigma Modulation for 32 LTE Carrier Aggregation and FBMC Signals,” J. Opt. Commun. Netw. 9(2), A233 (2017). [CrossRef]  

11. K. Bai, D. Zou, Z. Zhang, Z. Li, W. Wang, Q. Sui, Z. Cao, and F. Li, “Digital Mobile Fronthaul Based on Performance Enhanced Multi-Stage Noise-Shaping Delta-Sigma Modulator,” J. Lightwave Technol. 39(2), 439–447 (2021). [CrossRef]  

12. Enhanced common public radio interface (eCPRI) specification V2.0, 2019. [Online]. Available: http://www.cpri.info/downloads/eCPRI_v_2.0_2019_05_10c.pdf.

13. J. Wang, Z. Jia, L. A. Campos, and C. Knittle, “Delta-Sigma Modulation for Next Generation Fronthaul Interface,” J. Lightwave Technol. 37(12), 2838–2850 (2019). [CrossRef]  

14. L. Zhong, Y. Zou, S. Zhang, X. Dai, J. Zhang, M. Cheng, L. Deng, Q. Yang, and D. Liu, “An SNR-improved Transmitter of Delta-sigma Modulation Supported Ultra-High-Order QAM Signal for Fronthaul/WiFi Applications,” J. Lightwave Technol. 40(9), 2780–2790 (2022). [CrossRef]  

15. J. Wang, Z. Jia, L. A. Campos, and C. Knittle, “Real-Time Demonstration of 5-GSa/s Delta-Sigma Digitization for Ultra-Wide-Bandwidth LTE and 5G Signals in Next Generation Fronthaul Interface,” in 2018 European Conference on Optical Communication (ECOC) (IEEE, 2018), pp. 1–3.

16. Y. Zhu, X. Fang, L. Yin, L. Zhang, F. Zhang, and W. Hu, “Delta-Sigma Modulation with Coherent Detection for High-Fidelity 4194304-QAM Transmission at 71.5 dB SNR,” in Asia Communications and Photonics Conference 2021 (Optica Publishing Group, 2021), p. T4D.4.

17. A. Bhide, O. E. Najari, B. Mesgarzadeh, and A. Alvandpour, “An 8-GS/s 200-MHz Bandwidth 68-mW Delta–Sigma DAC in 65-nm CMOS,” IEEE Trans. Circuits Syst. II 60(7), 387–391 (2013). [CrossRef]

18. M. M. Ebrahimi, M. Helaoui, and F. M. Ghannouchi, “Time-interleaved delta-sigma modulator for wideband digital GHz transmitter design and SDR applications,” Prog. Electromagn. Res. B 34, 263–281 (2011). [CrossRef]  

19. R. F. Cordeiro, A. S. R. Oliveira, J. Vieira, and N. V. Silva, “Gigasample Time-Interleaved Delta-Sigma Modulator for FPGA-Based All-Digital Transmitters,” in 2014 17th Euromicro Conference on Digital System Design (IEEE, 2014), pp. 222–227.

20. M. Tanio, S. Hori, N. Tawa, and K. Kunihiro, “An FPGA-based all-digital transmitter with 9.6-GHz 2nd order time-interleaved delta-sigma modulation for 500-MHz bandwidth,” in 2017 IEEE MTT-S International Microwave Symposium (IMS) (IEEE, 2017), pp. 149–152.

21. M. Tanio, S. Hori, N. Tawa, T. Yamase, and K. Kunihiro, “An FPGA-based all-digital transmitter with 28-GHz time-interleaved delta-sigma modulation,” in 2016 IEEE MTT-S International Microwave Symposium (IMS) (IEEE, 2016), pp. 1–4.

22. R. F. Cordeiro, A. S. R. Oliveira, J. Vieira, and T. O. e Silva, “Wideband all-digital transmitter based on multicore DSM,” in 2016 IEEE MTT-S International Microwave Symposium (IMS) (IEEE, 2016), pp. 1–4.

23. D. C. Dinis, R. F. Cordeiro, A. S. R. Oliveira, J. Vieira, and T. O. Silva, “Improving the performance of all-digital transmitter based on parallel delta-sigma modulators through propagation of state registers,” in 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS) (IEEE, 2017), pp. 1133–1137.

24. A. Lorences-Riesgo, S. S. Pereira, D. C. Dinis, J. Vieira, A. S. R. Oliveira, and P. P. Monteiro, “Real-Time FPGA-Based Delta-Sigma-Modulation Transmission for 60 GHz Radio-Over-Fiber Fronthaul,” in 2018 European Conference on Optical Communication (ECOC) (IEEE, 2018), pp. 1–3.

25. H. Li, L. Breyne, J. Van Kerrebrouck, M. Verplaetse, C.-Y. Wu, P. Demeester, and G. Torfs, “A 21-GS/s Single-Bit Second-Order Delta–Sigma Modulator for FPGAs,” IEEE Trans. Circuits Syst. II 66(3), 482–486 (2019). [CrossRef]  

26. D. A. Parker and K. K. Parhi, “Area-efficient parallel FIR digital filter implementations,” in Proceedings of International Conference on Application Specific Systems, Architectures and Processors: ASAP ‘96 (IEEE Computer Soc. University of Minnesota, 1996), pp. 93–111.

