PCI Express Switching and Bridging Architecture

Performance requirements and physical-layer limitations have made the parallel bus interconnect architecture a thing of the past. Next-generation designs are increasingly turning to serial standards for chip-to-chip interconnects. Because the previous generation of PCI and PCI-X architectures dominated computing and embedded systems, PCI Express™ (PCIe™), a serial standard built on those existing protocols, has shown strong momentum in the early stages of the interconnect transition.


In addition to the ever-increasing number of PCIe processing units and peripheral chips, PCIe switching and bridging devices are becoming an important means of solving the major system problems encountered by early adopters in server and storage applications. To connect multiple system units and to translate protocols into and out of the PCIe domain, switching and bridging devices need optimized feature sets and architectures that meet both the performance and the system-cost objectives of moving to a serial interconnect architecture.

High-performance interconnection requirements

Current parallel protocols such as PCI-X and PCI must sustain ever-higher data transfer rates to keep up with system requirements, demanding exacting system and board-layout techniques and posing significant challenges for computational and peripheral resources. In short, they have reached the point where switching speed, bus width, and trace length can no longer be increased. The relentless demand for speed and performance, along with the rising system cost and time-to-market caused by growing investment in board design and layout, has driven designers toward serial interconnect technology.

High-speed serial interconnects solve key performance, power, layout, and cost issues. Because clock and data information are carried in a single data stream, a serial interconnect can achieve very high transfer rates without the traditional clock-to-data or pin-to-pin skew problems caused by trace-length or load mismatch. With serial, point-to-point interconnects, data can be transmitted at very high rates, and a single serial connection can carry the volume of data that previously required a large number of data lines plus their associated clock and control signals. The result is a relative reduction in pin count and board-layout complexity, which reduces system cost.
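The pin-count and bandwidth trade-off above can be made concrete with some simple arithmetic. The sketch below uses the first-generation PCIe figures implied by the article (2.5 GT/s per lane with 8b/10b line coding); the function name and constants are illustrative, not from any real API.

```python
# Illustrative arithmetic: usable bandwidth of a first-generation PCIe
# link. One lane needs only 4 signal pins (one differential pair per
# direction), with the clock embedded in the data stream.

RAW_RATE = 2.5e9    # Gen1 signaling rate per lane, transfers/s (assumed)
ENCODING = 8 / 10   # 8b/10b line coding: 8 data bits per 10 line bits

def pcie_bandwidth_gbps(lanes: int) -> float:
    """Usable bandwidth per direction, in Gb/s, for an N-lane Gen1 link."""
    return lanes * RAW_RATE * ENCODING / 1e9

print(pcie_bandwidth_gbps(1))  # 2.0 Gb/s per lane
print(pcie_bandwidth_gbps(8))  # 16.0 Gb/s for a x8 link
```

The x8 result matches the 16 Gb/s figure the article cites for high-performance PCIe connections.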

The emergence and early application of PCI Express

PCIe, built on PCI and PCI-X, has won early-market acceptance among designers transitioning to serial interconnect architectures. Because each connection provides a high-performance, point-to-point serial interconnect with bandwidth of up to 16 Gb/s, PCIe is rapidly becoming the de facto interconnect standard for high-performance server, workstation, and storage applications, and, much as PCI-X and PCI did before it, is poised to enter the communications and embedded markets.

While PCIe peripherals and processor architectures provide the key building blocks for next-generation system architectures, the advent of switching and bridging solutions has enriched the entire application environment. These solutions address several key issues: fanning out expensive computing resources for optimal utilization, extending I/O connectivity, and performing the protocol conversions that connect high-performance serial computing units to previous-generation peripherals and vice versa. Vendors with a deep understanding of the system can provide optimized PCIe products that address these issues with high-performance, cost-effective solutions.

PCI Express Switch for I/O Expansion and Resource Efficiency

Although the high-performance serial connections of the chipset and system ASICs in the "Northbridge" can fully support today's leading processors, these resources are limited and rarely meet the high-performance I/O and peripheral requirements of today's server and storage applications. In systems where I/O and peripheral connectivity requirements exceed Northbridge resources, system designers face a dilemma. Adding Northbridge devices can provide the required connectivity, but the increase in system cost is unacceptable. If Northbridge resources are instead matched one-to-one to the I/O slots and peripherals, the processing resources cannot be fully utilized, the flexibility and functionality of the system are limited, and the product is ultimately difficult for the market to accept. Figure 1 depicts a better solution: a PCIe switch lets a single Northbridge port fan out to multiple peripherals or slots.

A PCIe switch can allocate limited Northbridge resources to multiple I/O endpoints and make full use of the available bandwidth. It allows the Northbridge to share one high-bandwidth port among several lower-bandwidth resources. Alternatively, the switch can oversubscribe a single Northbridge port, offering the full available bandwidth to each downstream port, which enables efficient load balancing among devices or slots with bursty or non-sustained transfer patterns.
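The oversubscription idea can be sketched numerically. The port widths below are assumptions chosen for illustration (one x8 upstream link fanned out to four x4 downstream links), not figures from the article.

```python
# Hypothetical fan-out configuration: one x8 Gen1 upstream port
# shared by four x4 Gen1 downstream ports.

UPSTREAM_GBPS = 16.0                     # x8 upstream link (assumed)
DOWNSTREAM_GBPS = [8.0, 8.0, 8.0, 8.0]   # four x4 downstream links (assumed)

def oversubscription_ratio(upstream: float, downstream: list) -> float:
    """Aggregate downstream capacity divided by upstream capacity."""
    return sum(downstream) / upstream

# A 2:1 ratio is acceptable for bursty traffic: each downstream port can
# still burst at its full 8 Gb/s whenever the other ports are idle.
print(oversubscription_ratio(UPSTREAM_GBPS, DOWNSTREAM_GBPS))  # 2.0
```

This is why oversubscription pairs well with the non-sustained transfer patterns mentioned above: the shared upstream link is sized for average, not peak-sum, demand.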

The disadvantage of adding switching devices to a PCIe interconnect architecture is that these additional components can increase system cost and consume scarce board space. A switching architecture optimized for the target application can reduce the impact of these problems. In the server system described here, the PCIe switch extends, shares, and makes full use of valuable Northbridge resources, and this fan-out switching approach translates into the following architectural optimizations and advantages. First, the system only requires switching from a single upstream port to multiple downstream ports. By fixing one upstream port and configuring the remaining ports as downstream, the complexity of the switching core can be greatly reduced, making the chip more economical. Second, data almost always (90% of the time or more) flows from the upstream port to a downstream port or from a downstream device to the upstream port; peer-to-peer transfers between downstream ports (direct I/O-to-I/O traffic) make up only a small portion of the traffic. In some emerging systems, prohibiting peer-to-peer transfers between downstream ports is in fact desirable, both to prevent data corruption between peer ports and to support digital rights management by ensuring that revenue-generating data cannot be distributed peer to peer. Fixing the upstream and downstream roles of the ports allows the top-to-bottom and bottom-to-top transfer paths to be optimized, concentrating switching-core resources on those paths and reducing their latency. In addition, de-prioritizing peer-to-peer performance saves buffering and other expensive, performance-driven structures in the device.
Devices that implement these optimizations add less than 200 ns of latency to the PCIe path, while fully scaling I/O and peripheral connectivity at only a small fraction of the cost of additional Northbridge devices.
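The routing policy of such a fan-out switch can be summarized in a few lines. This is a minimal sketch under stated assumptions: port 0 is the fixed upstream port, and peer-to-peer transfers are refused rather than forwarded, as described above; the function and return strings are illustrative.

```python
# Routing policy of a fan-out switch with one fixed upstream port and
# peer-to-peer forwarding disabled (assumed port numbering: 0 = upstream).

UPSTREAM_PORT = 0

def route(src: int, dst: int) -> str:
    """Classify a transfer between two switch ports."""
    if src == UPSTREAM_PORT and dst != UPSTREAM_PORT:
        return "forward downstream"   # Northbridge -> peripheral
    if src != UPSTREAM_PORT and dst == UPSTREAM_PORT:
        return "forward upstream"     # peripheral -> Northbridge
    return "blocked (peer-to-peer disabled)"

print(route(0, 3))  # forward downstream
print(route(2, 0))  # forward upstream
print(route(2, 1))  # blocked (peer-to-peer disabled)
```

Because only the two forwarding cases need fast datapaths, the switching core can dedicate its buffers to them, which is exactly the latency and cost saving the text describes.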

PCI Express Bridge for Protocol Translation

Although PCIe-based processor architectures are increasingly being adopted for server and storage applications, the conversion of peripheral devices to PCIe connectivity is still far from universal. Beyond device availability, converting a given server or storage platform to PCIe may be limited by other factors as well. Storage applications provide a good example of this lag and highlight the need for protocol conversion in the form of forward and backward bridging.

Consider a forward-bridging system. In this configuration, a PCIe connection from the Northbridge or from a switching element must be converted to the parallel PCI-X and PCI protocols to support peripheral connectivity. In many current storage applications, the need for greater capacity and shorter seek times forces system architects to select the newest, most capable components, and, as discussed earlier, the chipsets associated with such processors have shifted to high-performance PCIe interconnects. For economic reasons, however, key storage peripherals such as the Fibre Channel host bus adapter (HBA) have not yet been converted to serial connections. In such a system, the high-performance processor architecture and the PCI-X-based HBA are both indispensable, yet these system units cannot communicate natively. To solve this problem, PCIe-to-PCI-X bridging solutions have emerged. A PCIe-to-PCI-X/PCI bridge enables connection and data transfer between the serial and parallel domains. In this case the bridge operates in forward-bridging mode.

While some peripheral devices, such as the HBA discussed above, have lagged behind industry-leading processors in adopting PCIe, other storage peripherals are inherently serial and have begun to use PCIe interconnects. Serial ATA (SATA) disk controllers are one such storage system unit already on the market. Demand for these high-performance disks is forcing some mid-life storage platforms to upgrade in order to remain competitive late in their lifecycles. Because of the complexity of these systems' CPU hardware and software designs, the upgrade cycle does not allow the processor itself to be replaced, and in many of these systems PCI-X is used for processor-to-peripheral interconnection. In this case, the required processor and peripheral devices need a bridging element in order to communicate. This is a simple example of a backward-bridging system: a PCI-X/PCI-to-PCIe, or backward, bridge enables an established processor architecture to support emerging high-performance I/O peripherals that use serial interconnect technology.
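The two bridging modes can be summarized by which protocol sits on each side of the bridge. The sketch below is a hedged model of that classification: "primary" means the processor-facing side and "secondary" the peripheral-facing side; the function and its string values are illustrative assumptions, not a real device register or API.

```python
# Classifying a bridge by the protocol on its processor-facing
# (primary) and peripheral-facing (secondary) sides, following the
# forward/backward terminology used in the text.

PARALLEL = ("PCI-X", "PCI")

def bridge_mode(primary: str, secondary: str) -> str:
    if primary == "PCIe" and secondary in PARALLEL:
        return "forward bridge"   # serial host, parallel peripheral (e.g. PCI-X HBA)
    if primary in PARALLEL and secondary == "PCIe":
        return "backward bridge"  # parallel host, serial peripheral (e.g. SATA controller)
    return "no conversion needed"

print(bridge_mode("PCIe", "PCI-X"))  # forward bridge
print(bridge_mode("PCI-X", "PCIe"))  # backward bridge
```

The two storage examples in the text map directly onto these cases: the Fibre Channel HBA behind a PCIe chipset needs a forward bridge, and the PCIe SATA controller behind a PCI-X processor complex needs a backward bridge.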

Adding bridging devices to the system architecture solves the primary problem of protocol conversion. However, these devices bring the same basic drawbacks as in the switching case: additional components can affect performance, increase system cost, and consume scarce board space. Careful selection and optimization of key architectural features helps reduce the impact on system performance and cost. For example, although PCI-X and PCI are bus-based interconnect standards, the high bandwidth requirements of connecting processors to high-performance I/O often restrict their use to point-to-point connections, both to prevent peripherals from being blocked by bus contention and to guarantee signal integrity. A PCIe-to-PCI-X/PCI bridge that is optimized for high bandwidth rather than for complex bus-management and arbitration logic can therefore be tuned to the point-to-point model of a single peripheral per parallel port, achieving higher performance while reducing die area and cost.

Beyond meeting key system specifications in the bridge design itself, system designers must make important choices to build a solution that supports a wide range of product forms and functions. Devices that support both forward and backward bridging increase design complexity but can serve many more applications. Offering different performance levels on the PCIe and PCI-X/PCI ports helps make the most efficient use of system resources and reduce cost. Since protocol-conversion requirements will likely persist across multiple generations of products and platforms, the choice of supported system configurations may be the most important optimization task in protocol-bridge design.

Future adoption and application models

The performance requirements of high-performance computing, server, and storage applications that demand serial interconnects have been the driving force behind the adoption of PCIe and have spurred the development of the PCIe ecosystem. Communications and embedded applications, whose products have longer lifecycles, will begin adopting PCIe over the next 18-24 months as those products are redesigned and as more PCIe system units become available.
While basic I/O expansion and protocol-conversion requirements will persist, communications and embedded applications are expected to adopt new application models and to require specifically optimized product forms to achieve high-performance, cost-effective systems. Chip vendors that deliver optimized solutions combining system knowledge, flexible architectures, and sound design practices will succeed.
