
    Top 10 technology trends in the global semiconductor industry in 2024


    Throughout 2023, the haze of the semiconductor downturn had still not fully lifted, but the industry saw a glimmer of light. The emergence of ChatGPT at the beginning of the year triggered a global rush toward generative artificial intelligence (AIGC). The rise of AI and large models has spawned a diverse range of deployment scenarios, giving a strong boost to applications such as data centers and automotive electronics, while also posing new challenges for chip computing power, energy efficiency, memory, and integration.


    But it is precisely these challenges that have spurred semiconductor development across materials, design, manufacturing, and packaging. After the third-generation semiconductors silicon carbide and gallium nitride entered large-scale commercialization, the fourth-generation semiconductor gallium oxide began to emerge. Riding the tailwind of large models, AI chips have become the main battlefield for in-house development by major chipmakers and even system vendors; to obtain more computing power and faster memory, Chiplet, 3D-IC, HBM, and a series of new memory technologies are also on the agenda. Even programmable optical computing, long dismissed as "lab technology," is beginning to attempt to replace the linear-computation portion of the GPU.


    Over the course of the year, AspenCore's global analyst team exchanged views with industry experts and manufacturers and selected ten technology trends expected to emerge or develop rapidly in the global semiconductor industry in 2024, offered here for your reference.


    Trend 1: AI chips accelerate for generative AI

    2023 was a big year for AI. From the end of 2022 into early 2023, generative AI, represented by ChatGPT, erupted on the application side, and the term "generative AI" (or AIGC) was tossed around throughout 2023 as if it were the prelude to an era of so-called "strong AI." In fact, Nvidia's addition of dedicated Transformer engines to its data-center GPUs did not begin in 2023, but this early positioning clearly laid the foundation for accelerating the underlying computing power of generative AI.


    From the chip perspective, the reason generative AI is described as an "explosion" is that, shortly after GPT, Stable Diffusion, and other models emerged, nearly every high-compute AI chip for the data center, whether for training or inference, saw its market script rewritten. Almost every vendor now advertises that its chips can provide computing power for generative AI and can work with or support different large models; the 2023 WAIC World AI Conference was almost exclusively about generative AI.


    Not only data centers: edge and device-side AI chip companies are also embracing generative AI one after another. Intel spent half the year promoting next year's "AI PC," with 2024 PC processors set to fully integrate dedicated AI acceleration units; by the end of 2023, MediaTek was touting generative AI chips for mobile phones, which can run generative AI model inference locally; and even some embedded chip companies are talking up generative AI.


    In fact, even setting generative AI itself aside, the AI whirlwind it created has strongly reignited the edge AI craze: traditional MCU/MPU manufacturers including TI, Renesas, and Infineon have re-emphasized the value of edge AI today. This trend, along with the popularity of generative AI in data centers and in PCs and mobile phones, will continue and deepen in 2024, and as applications mature it will open up more possibilities for the digital transformation of society as a whole.


    Trend 2: Using Chiplet technology to scale computing power becomes mainstream

    As Moore's Law slows and new applications such as AI, autonomous driving, and data centers demand more memory and computing power, relying solely on the continued evolution of advanced process nodes is no longer sustainable; Chiplet and 3D heterogeneous integration will provide new momentum for breaking through the bottlenecks of integrated-circuit development. In 2023, driven by chip giants such as TSMC, Samsung, and Intel together with supply-chain companies, each link of the Chiplet industry chain gradually matured, forming a complete ecosystem spanning Chiplet system design, EDA/IP, the dies themselves (compute die, non-compute die, IO die, base die), manufacturing, and packaging and test.


    Global semiconductor giants are actively launching Chiplet-based products, such as Tesla's Dojo deep-learning and model-training chip, AMD's MI300 APU accelerator, and Nvidia's Ampere A100 GPU. Domestic computing-chip manufacturers are also following suit. In 2024, as AI large models continue to develop, using Chiplet technology to efficiently and flexibly scale computing power will become a mainstream approach, and it will also extend to board-level multi-chip interconnection and even larger multi-board, multi-cabinet interconnection solutions.
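
    One way to see why partitioning a large design into chiplets helps, beyond IP reuse and mixing process nodes, is the effect on manufacturing yield. The sketch below is a back-of-the-envelope illustration only: the Poisson yield model and the defect density are textbook placeholder assumptions, not figures from this article.

```python
import math

# Simple Poisson yield model: Y = exp(-A * D0),
# A = die area in cm^2, D0 = defect density in defects/cm^2.
# D0 = 0.1 defects/cm^2 is an illustrative placeholder value.
D0 = 0.1

def die_yield(area_mm2: float, d0: float = D0) -> float:
    return math.exp(-(area_mm2 / 100.0) * d0)

monolithic = die_yield(800)   # one 800 mm^2 monolithic die
chiplet = die_yield(200)      # each of four 200 mm^2 chiplets

print(f"Monolithic 800 mm^2 die yield: {monolithic:.1%}")   # ~44.9%
print(f"Single 200 mm^2 chiplet yield: {chiplet:.1%}")      # ~81.9%
# With known-good-die testing before assembly, defective chiplets are
# discarded individually, so far less good silicon is wasted than when
# a single defect kills an entire monolithic die.
```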


    However, although Chiplet is becoming one of the key technologies for meeting today's computing needs, many design challenges remain, such as interconnection, heat dissipation, yield, warpage, passive-device integration, parasitics, cost, and reliability. Effective multi-chiplet integration relies on packaging technology, including the design, production, and verification of high-density advanced packages, as well as the design and verification of high-speed channels, power delivery, heat dissipation, stress management, and reliability. At the same time, the limitations of Chiplet adoption are still obvious: the technology remains dominated by the vertically integrated systems of a few international manufacturers, the related design ecosystems are relatively closed, and interconnection standards still need to mature.


    Trend 3: HBM rises in both price and volume

    With the rapid rise of artificial intelligence/machine learning (AI/ML) worldwide, ultra-high-bandwidth solutions represented by high-bandwidth memory (HBM, HBM2, HBM2E, HBM3) began to gain traction around 2020. Entering 2023, the explosive expansion of the generative AI market represented by ChatGPT rapidly increased demand for AI servers and, in turn, drove up sales of high-end products such as HBM3.


    Omdia research shows that from 2023 to 2027, HBM market revenue is expected to grow at roughly 52% per year, with its share of DRAM revenue rising from 10% in 2023 to nearly 20% in 2027. Moreover, HBM3 is priced at roughly 5-6 times standard DRAM, which explains why HBM accounted for only 1.7% of total DRAM shipments in 2023 yet 11% of sales revenue, with Nvidia, AMD, Microsoft, Amazon, and others lining up to secure supply even at a premium.
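
    As a quick arithmetic check on those figures, using only the numbers quoted above, the implied per-unit price premium can be recovered from the two market shares:

```python
# HBM's 2023 share of DRAM shipments vs. DRAM revenue, as quoted above.
shipment_share = 0.017   # 1.7% of DRAM shipments
revenue_share = 0.11     # 11% of DRAM revenue

# Implied average selling price relative to the overall DRAM average.
implied_premium = revenue_share / shipment_share
print(f"Implied HBM price premium: ~{implied_premium:.1f}x the DRAM average")
# ~6.5x, broadly consistent with the 5-6x HBM3 price multiple cited above.
```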


    HBM technology, introduced in 2013, is a high-performance 3D-stacked DRAM architecture whose first generation offered per-pin data rates of around 1 Gbps. Since then, the standard has been updated roughly every two to three years, with the second generation (HBM2), third generation (HBM2E), fourth generation (HBM3), and fifth generation (HBM3E) continually setting new records for bandwidth and peak data rate. Given that the bandwidth of competing memory products only increased two- or threefold over the same period, it is reasonable to attribute HBM's rapid progress to fierce competition among memory manufacturers.
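
    Those per-pin rates translate into stack bandwidth through HBM's very wide interface. The rough calculation below is a sketch assuming the 1024-bit per-stack interface defined by the JEDEC HBM standards and nominal, approximate per-pin rates; actual products vary.

```python
# Per-stack bandwidth = per-pin data rate (Gb/s) x interface width (bits) / 8.
# HBM stacks expose a 1024-bit interface; the per-pin rates below are
# nominal generation targets and are approximate.
INTERFACE_BITS = 1024

nominal_pin_rate_gbps = {
    "HBM (2013)": 1.0,
    "HBM2": 2.4,
    "HBM2E": 3.6,
    "HBM3": 6.4,
    "HBM3E": 9.2,
}

for gen, rate in nominal_pin_rate_gbps.items():
    bandwidth_gb_s = rate * INTERFACE_BITS / 8
    print(f"{gen:11s}: {rate:4.1f} Gb/s per pin -> ~{bandwidth_gb_s:.0f} GB/s per stack")
```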


    As an important technological innovation, HBM's development prospects are quite bright, especially in AI training applications. However, compared with GDDR DRAM per-pin rates of 16/18 Gbps, HBM still lags even as HBM3-class parts reach 9.2 Gbps, and the factors limiting HBM's development come mainly from two aspects: the interposer, and the complexity and added manufacturing cost of 3D stacking. Nevertheless, we believe that with the deep involvement of the global memory giants these challenges will eventually be solved, and the battle in the HBM market will only intensify.


    Trend 4: Satellite communication takes a big step forward, and 6G begins to take shape

    In last year's forecast we noted that mobile satellite communication technology would begin to roll out in 2023. The technology has now taken another big step forward with Huawei's breakthrough in RF antenna technology: with the launch of Huawei's Mate 60 Pro series, the mobile phone industry has moved from point-to-point, one-way satellite short messaging into the era of satellite calling.


    In the past, attention was focused mainly on 5G and its chips, while satellite communications were overlooked. Now a number of companies developing satellite communication chips, such as Hualichuangtong and Haige, are growing rapidly.


    On the SoC side, UNISOC launched the V8821, the first 5G satellite communication chip compliant with the 3GPP IoT NTN Release 17 standard. It supports L-band maritime satellites and S-band Tiantong satellites, can be extended to access other NTN satellite systems, and provides data transmission, text messaging, calling, and location-sharing functions. Besides smartphones that connect directly to satellites, it can also be used in IoT devices, wearables, connected vehicles, and more.


    Also at MWC 2023, MediaTek introduced the MT6825 IoT-NTN chipset, which connects to geosynchronous-orbit (GEO) satellites and eases migration to 3GPP NTN-standard satellite networks. In August 2023, MediaTek released a 6G NTN technical white paper on integrating satellite and terrestrial networks. In the future, through the compatibility and complementarity of satellite and terrestrial networks, this will create three-dimensional network coverage across land, sea, and air, all terrain and all airspace, providing users with seamless, intelligent communication services.


    With continuous breakthroughs in satellite communication technology for mobile phones and the Internet of Things, the outline of future 6G is taking shape. 2024 is set to be the year satellite communication technology blossoms in full.


    Trend 5: The commercialization of gallium oxide is coming

    The development of wide-bandgap semiconductors is in full swing, and among them gallium oxide, a fourth-generation semiconductor, has gradually emerged. Compared with diamond and aluminum nitride, the other fourth-generation semiconductors, gallium oxide has achieved breakthroughs in larger wafer sizes, and some forecasts predict that the gallium oxide device market will eventually exceed that of gallium nitride devices.


    Gallium oxide has five confirmed crystal phases, of which β-gallium oxide is the most stable and the focus of most current R&D. Gallium oxide features a very high breakdown field strength, and its theoretical specific on-resistance is far lower than that of gallium nitride or silicon carbide, which can effectively reduce device conduction losses.
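
    The on-resistance claim follows from the unipolar device limit, in which specific on-resistance scales inversely with Baliga's figure of merit, BFOM ~ εr·μ·Ec³. The comparison below is a rough sketch using commonly cited textbook material parameters; the values are illustrative assumptions (β-Ga2O3 mobility in particular is still debated) and are not taken from this article.

```python
# Baliga figure of merit (relative): BFOM ~ eps_r * mobility * Ec^3.
# Ideal unipolar specific on-resistance scales as 1/BFOM.
# Parameter values are commonly cited approximations and vary by source.
materials = {
    #              eps_r  mobility (cm^2/Vs)  Ec (MV/cm)
    "Si":          (11.7, 1400,               0.3),
    "4H-SiC":      (9.7,   950,               2.5),
    "GaN":         (9.0,  1200,               3.3),
    "beta-Ga2O3":  (10.0,  300,               8.0),
}

bfom_si = 11.7 * 1400 * 0.3**3
for name, (eps_r, mu, ec) in materials.items():
    bfom = eps_r * mu * ec**3
    print(f"{name:11s}: BFOM ~{bfom / bfom_si:6.0f}x Si "
          f"-> ideal R_on,sp ~{bfom_si / bfom:.4f}x Si")
# Ga2O3 comes out several times higher than GaN and roughly an order of
# magnitude above SiC, which is why its theoretical on-resistance is lower.
```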


    Gallium oxide crystals can be grown from the melt at atmospheric pressure, giving the material a manufacturing cost advantage. Its development prospects are becoming increasingly prominent, with the market currently led by two Japanese players, Novel Crystal Technology (NCT) and Flosfia. The industry has already mass-produced 4-inch gallium oxide wafers and is expected to move to 6-inch wafers in the next few years, while commercial development of β-gallium oxide Schottky diodes is accelerating.


    In the power electronics market, gallium oxide overlaps with gallium nitride and silicon carbide applications, and the adoption rate of power devices in vehicles has been rising year by year, opening larger application scenarios for gallium oxide. In the short term it also shows great potential in consumer electronics, home appliances, and highly reliable, high-performance industrial power supplies. While gallium oxide opens up new possibilities, silicon carbide and gallium nitride retain their own distinct advantages and applications. With continuing scientific and technological progress and the expansion of application scenarios, gallium oxide is expected to play an increasingly important role in the semiconductor field.


    Trend 6: Upstream and downstream players actively push the commercialization of 3D-IC

    Over the past 50 years, Moore's Law has driven the development of the entire semiconductor industry. Now, with advanced process nodes progressing slowly and R&D costs soaring, transistor counts can no longer double every 18 to 24 months as they once did, and semiconductor performance gains face a bottleneck. Advanced packaging, however, keeps evolving, from MCM in the 1970s to SiP, then 2.5D, and now 3D-IC and heterogeneous integration, opening another door for innovation in the semiconductor industry.


    Board-level 3D packaging on PCB appeared years ago and is nothing new; what is being discussed now is wafer-on-wafer (WoW) stacked 3D-IC, which offers far greater inter-die communication bandwidth. However, 3D-IC has not yet reached large-scale commercial use, mainly because of two major challenges: heat dissipation and mechanical stress at the die surfaces.


    Greater complexity, compactness, and density make heat inside a 3D-IC harder to dissipate than in a traditional 2D chip, and stacking dies built on different processes also produces unusual stress behavior. Dedicated die-to-die (D2D) interface IP and TSV technology are needed for high-speed, efficient communication between dies, and appropriate EDA tools must be used for pre-bond thermal and stress analysis to help chip design engineers optimize system integration.
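
    A toy one-dimensional thermal model illustrates why the lower die in a stack runs hotter than it would in a 2D package. The power and thermal-resistance numbers below are illustrative assumptions only, not figures from this article or from any EDA tool.

```python
# 1D thermal-resistance ladder for a two-die stack cooled from the top.
# The bottom die's heat must also cross the top die and the bond interface,
# so its junction temperature rises faster than in a 2D package.
T_AMBIENT = 25.0    # degrees C
R_HEATSINK = 0.20   # K/W, heatsink + lid (illustrative)
R_TOP_DIE = 0.10    # K/W, through the top die + die-to-die bond
P_TOP = 30.0        # W dissipated in the top die
P_BOTTOM = 60.0     # W dissipated in the bottom die

# All heat exits through the heatsink; only the bottom die's heat also
# passes through the top die and the bond layer.
t_top = T_AMBIENT + (P_TOP + P_BOTTOM) * R_HEATSINK
t_bottom = t_top + P_BOTTOM * R_TOP_DIE

print(f"Top die junction:    {t_top:.1f} C")      # ~43 C
print(f"Bottom die junction: {t_bottom:.1f} C")   # ~49 C
```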


    Throughout 2023, the industry also actively accelerated 3D-IC progress and system-level innovation. For example, TSMC launched the new 3Dblox 2.0 open standard, and its 3DFabric platform gives customers the freedom to mix and match front-end and back-end 3D-IC assembly and test technologies, including System on Integrated Chips (SoIC), Integrated Fan-Out (InFO), and CoWoS. UMC established a wafer-to-wafer (W2W) 3D IC project with Winbond, Faraday, ASE, and Cadence to accelerate production of 3D-packaged products, with system-level verification expected to be completed in 2024; Sunsys partnered with LPC to launch a new W2W and chip-on-wafer (CoW) stacking solution that lets developers stack and bond DRAM directly onto logic chips.


    Trend 7: Micro OLED on the eve of large-scale application

    Although Micro LED is regarded as the ideal display technology, it still faces technical challenges such as mass transfer, full-color conversion, and inspection and repair, and cannot be deployed at scale in the short term. Micro OLED is a deep fusion of display and semiconductor technology, combining CMOS processes with OLED technology and tightly integrating inorganic and organic semiconductor materials. Micro OLED has its own technical problems, particularly that the CMOS process and the OLED process are distinct, specialized, and complex, placing strict demands on integration, but compared with Micro LED its prospects for large-scale application are better and nearer.


    Compared with Fast-LCD, the current mainstream VR/AR display technology, Micro OLED offers many advantages, chiefly low power consumption, a wide operating temperature range, high contrast, and fast response, making up for most of Fast-LCD's shortcomings and making it the microdisplay technology best suited to near-eye displays. Apple's announcement of the Vision Pro headset with Micro OLED displays at its 2023 Worldwide Developers Conference is bound to promote the technology's commercial adoption.


    However, because of the properties of its organic light-emitting materials, Micro OLED has two inherent technical obstacles: brightness and lifetime; like other OLED technologies, it suffers from burn-in and limited lifespan. Given that Micro OLED is mainly used in consumer electronics, the lifetime issue, as with OLED displays in smartphones, will not have much impact; the bigger concern is that Micro OLED brightness still cannot meet the requirements of fully immersive VR/AR devices.


    In 2023, Micro OLED manufacturers worldwide actively expanded 8-inch and 12-inch Micro OLED production lines, some of which have reached volume production, and more Micro OLED panels are expected to be available for virtual-reality terminals in 2024. Meanwhile, given that Micro LED cannot be mass-produced in the short term, Micro OLED has the opportunity to become the mainstream microdisplay technology for some time to come.


    Trend 8: "Programmable optical computing chips" respond to skyrocketing demand for computing power

    The wave of generative AI has sent demand for computing power skyrocketing, but as Moore's Law approaches its limits, conventional electronics is struggling to meet the needs of this new round of technological revolution, and people have begun looking to "replace electrons with photons" to further increase computing power.


    Optical chips have long existed, but most are non-programmable optical linear computing units; to boost computing power with light, the computing units must be programmable. Programmable optical computing chips did not see a breakthrough until 2017, when Yichen Shen and colleagues published a paper in Nature Photonics proposing a new computing architecture based on optical neural networks.
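
    In that line of work, a general weight matrix is typically factored by singular value decomposition into two unitary matrices and a diagonal matrix: the unitaries map onto meshes of Mach-Zehnder interferometers, the diagonal onto per-channel attenuation or gain, and nonlinear activations remain in the electronic domain. The minimal numpy sketch below illustrates the decomposition that such hardware would implement; it is an illustration of the idea, not code from the paper.

```python
import numpy as np

# Toy 4x4 "layer weight" matrix that an optical linear core would implement.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))

# SVD: W = U @ diag(s) @ Vh. U and Vh are unitary, so each can be realized
# as a mesh of Mach-Zehnder interferometers; diag(s) maps to per-channel
# attenuation/gain.
U, s, Vh = np.linalg.svd(W)

x = rng.normal(size=4)            # input vector (optical field amplitudes)
y_optical = U @ (s * (Vh @ x))    # what the photonic mesh would compute
y_direct = W @ x                  # reference electronic matrix-vector product

print(np.allclose(y_optical, y_direct))  # True: the factored form is exact
```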


    Programmable optical computing chips offer high integration, high speed and low latency, low energy consumption, strength in the matrix computations central to AI, large cost-reduction potential, and excellent waveguide transmission performance. Challenges also exist: complex computation requires large numbers of optical devices, leading to more complex structures and larger die sizes; making every device individually controllable for programmability demands tighter process integration, bringing cost, stability, and yield challenges; and ambient temperature affects computation accuracy, creating thermal-control challenges.


    One main route to commercializing silicon photonic chips exploits the technology's versatility: for example, first targeting the linear-computation cores of GPUs and replacing them with optical computing cores to form a new electro-optical hybrid computing paradigm, while minimizing customers' learning costs and barriers to adoption.


    The second route is the modularization of optical chips, pursuing "plug and play" optical modules for chip-to-chip transmission while still serving computing applications. This involves using on-chip and inter-chip optical networking to replace electrical interconnects between modules, exploiting light's low latency and low energy consumption; wafer-level optical interconnect networks can also achieve higher utilization when mapping computing tasks across different chips.


    Trend 9: New memory, from theory to practice

    The development of the Internet of Things and artificial intelligence has led to an explosion of information, and all of this data must be collected, processed, transmitted, stored, and analyzed at multiple levels from the edge to the cloud. Meanwhile, Moore's Law scaling is slowing rapidly and can no longer deliver simultaneous improvements in power, performance, area, and cost (PPAC).


    Against this backdrop, companies of all sizes have begun racing to develop new hardware platforms, architectures, and designs to improve computing efficiency, and new memory technologies represented by MRAM (magnetoresistive RAM), PCRAM (phase-change RAM), and ReRAM (resistive RAM) are one of the key areas chip and system designers are pursuing. These new memories both provide more tools for near-memory compute and serve as building blocks for the next stage, in-memory compute.


    Whether as standalone chips or embedded in ASICs, microcontrollers (MCUs), and compute processors, these memories have the potential to outcompete today's mainstream memory technologies. Replacing the eFlash and SRAM in a microcontroller with embedded MRAM can cut power consumption by up to 90%. Using a single-transistor MRAM cell in place of a six-transistor SRAM cell yields higher bit density and smaller die size, and these power and area-cost advantages make MRAM a strong contender for edge devices. Compared with traditional NAND flash, PCRAM or ReRAM storage can offer more than 10 times the access speed, making it better suited to data storage in the cloud.
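
    To put the cell-count argument in rough numbers, the back-of-the-envelope calculation below uses commonly cited ballpark cell sizes; these are illustrative assumptions, not figures from this article, and real designs vary widely.

```python
# Rough area comparison between a 6T SRAM cell and a 1T-1MTJ MRAM cell.
# Cell sizes in F^2 (F = process feature size) are illustrative ballpark
# values; actual cell sizes depend heavily on the process and design.
SRAM_6T_CELL_F2 = 150    # ~120-200 F^2 is typical for 6T SRAM
MRAM_1T_CELL_F2 = 30     # ~20-45 F^2 is often quoted for embedded MRAM

density_gain = SRAM_6T_CELL_F2 / MRAM_1T_CELL_F2
print(f"Same-area bit-density gain of MRAM over SRAM: ~{density_gain:.0f}x")

# Relative array area for a 1 MB on-chip buffer (in F^2, ignoring periphery):
bits = 1 * 1024 * 1024 * 8
print("Array area, SRAM vs MRAM:",
      bits * SRAM_6T_CELL_F2, "vs", bits * MRAM_1T_CELL_F2, "F^2")
```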


    However, these emerging memories also share some key problems. At the cell level there are trade-offs among thermal stability, write current, and endurance, which must be overcome through material selection, process integration, and holistic circuit optimization. At the array-architecture level, crossbar arrays suffer from crosstalk caused by sneak-path leakage. Based on current research progress, cutting-edge techniques such as heterostructure design of phase-change materials and spin-orbit torque (SOT) switching are expected to better address these challenges.


    Trend 10: Silicon-based quantum computing approaches utility and commercialization

    Many companies and institutions working on quantum computers have focused on approaches such as superconducting qubits. In recent years, however, more and more research groups have turned to silicon-based quantum computing. After all, silicon is a far more readily available material, which gives the approach a natural advantage, and in silicon-based quantum computing a qubit can be a single electron, making it extremely small.


    Silicon-based quantum computing is much easier to mass-produce, even if it lags superconducting qubits in operation time (analogous to gate delay). The past two years have been fruitful for the field. In 2022, silicon-based quantum computing achieved some relatively large technical breakthroughs, including demonstrations of quantum operations with extremely low error rates, giving the technology the potential to scale and be used for real computation. Other studies have demonstrated longer spin-qubit coherence times, on platforms compatible with CMOS manufacturing.

    2023 also saw several major events for quantum computing. In June, IBM announced that quantum computers had entered the "utility" phase; in September, Australia's chief scientist Cathy Foley said she had seen "the dawn of the quantum age"; and physicist Michelle Simmons won Australia's top national science award for developing a silicon-based quantum computer.


    On the commercialization front, Intel's quantum computing research in recent years is representative: it naturally builds on the company's accumulated expertise in transistor design and manufacturing and is silicon-based. Companies including Quantum Motion and Silicon Quantum Computing are also developing silicon quantum computers. Further commercialization of silicon-based quantum computing may arrive in 2024.