TSMC Is Chasing A Trillion-Transistor AI Bonanza


(MENAFN- Asia Times) TSMC has announced that its semiconductor manufacturing operations are rapidly recovering from the disruption caused by the earthquake that hit Taiwan on April 3 and that its revenue target for 2024 remains unchanged. The company's factories were built with a high degree of earthquake resistance.

While management conducts a comprehensive review of the situation, we should step back from the headlines and make sure that the 10-year technology development scenario recently laid out by Chairman Mark Liu and Chief Scientist Philip Wong does not get lost in the shuffle.

On March 28, IEEE Spectrum, the magazine of the Institute of Electrical and Electronics Engineers, published an essay, "How We'll Reach a 1 Trillion Transistor GPU," which explains how "advances in semiconductors are feeding the AI boom."

First, note that Nvidia's new Blackwell architecture AI processor combines two reticle-limited 104-billion-transistor graphics processing units (GPUs) with a 10-terabytes-per-second interconnect and other circuitry in a single system-on-chip (SoC).

Reticle-limited means limited by the maximum size of the photomask used in the lithography process, which transfers the design to the silicon wafer. TSMC is therefore aiming for a roughly tenfold increase in the number of transistors per GPU in the coming decade.
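The "roughly tenfold" figure follows from the numbers cited above; a quick back-of-the-envelope check (transistor counts are those given in the article):

```python
# Sanity check of the scaling claim, using figures cited in the article.
BLACKWELL_TRANSISTORS_PER_GPU = 104e9  # one reticle-limited Blackwell die
TARGET_TRANSISTORS = 1e12              # the trillion-transistor goal

scale_factor = TARGET_TRANSISTORS / BLACKWELL_TRANSISTORS_PER_GPU
print(f"Required increase per GPU: ~{scale_factor:.1f}x")  # ~9.6x, i.e. roughly tenfold
```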

The essay starts off with a review of the progress of semiconductor manufacturing and artificial intelligence so far:

  • The IBM Deep Blue supercomputer that defeated world chess champion Garry Kasparov in 1997 used 0.6- and 0.35-micron node technology.
  • The AlexNet neural network that won the ImageNet Large Scale Visual Recognition Challenge in 2012, launching the era of machine learning, used 40-nanometer (nm) technology.
  • The AlphaGo software program that defeated European Go champion Fan Hui in 2015 was implemented using 28-nm technology, while the initial version of ChatGPT was trained on systems built with 5-nm technology.
  • Blackwell GPUs are made using a refined version of the 4-nm process used by TSMC to fabricate its predecessor, the Nvidia Hopper GPU.

With the computation and memory capacity required for AI training increasing by orders of magnitude, Liu and Wong note that "If the AI revolution is to continue at its current pace, it's going to need even more from the semiconductor industry."

This will require not only moving to the 2-nm process node, scheduled for 2025, and then to the 1.4-nm (or 14A, with A for angstrom) node in 2027 or 2028, but also advancing from 2D scaling to 3D system integration.
