China Finally Losing Hope in NVIDIA
Thu, 26 Dec 2024 | https://thechinaacademy.org/china-finally-losing-hope-in-nvidia/
Despite being treated unfairly by NVIDIA, China waited until the US and EU launched antitrust probes to act.

A few days ago, NVIDIA’s Tmall flagship store suddenly became empty. While the store and search function remained, all the products for sale mysteriously disappeared.

Clearly, NVIDIA, known for “selling shovels next to a gold mine,” has run into major trouble. To the discerning eye, this is plainly one link in a chain reaction.

On December 9th, NVIDIA was formally investigated for suspected violations of China’s Anti-Monopoly Law and the Notice on the Anti-Monopoly Review Decision Concerning the Acquisition of Mellanox Technologies Ltd. by NVIDIA Corporation.

Although no official conclusions have been reached, looking back at the origins reveals some clues.

In 2019, NVIDIA acquired Mellanox Technologies, a chip manufacturer specializing in network adapters, switches, software, and chips that significantly boost device interconnectivity speed, enhancing operational efficiency for internet companies and the finance industry.

During its peak in 2015, Mellanox Technologies held an 80% market share in the global InfiniBand market, with even tech giants like Alibaba and Baidu becoming major clients.

In response to this powerful merger, the State Administration for Market Regulation imposed additional restrictive conditions, requiring compliance with certain obligations.

• When selling NVIDIA GPU accelerators and Mellanox high-speed network interconnect devices to the Chinese market, no forced bundling or imposition of unreasonable transaction conditions shall be allowed. Customers must not be hindered or restricted from purchasing or using the aforementioned products separately. Discrimination against customers purchasing these products individually based on service levels, prices, software features, etc., is prohibited.

• Supply NVIDIA GPU accelerators, Mellanox high-speed network interconnect devices, related software, and accessories to the Chinese market based on the principles of fairness, reasonableness, and non-discrimination.

• Ensure the continued interoperability of NVIDIA GPU accelerators with third-party network interconnect devices and Mellanox high-speed network interconnect devices with third-party accelerators.

• Maintain the open-source commitment for Mellanox high-speed network interconnect device point-to-point communication software and collective communication software.

• Implement protective measures for the information of third-party accelerator and network interconnect device manufacturers.

However, it appears that NVIDIA may have violated these commitments during subsequent sales processes.

If the allegations prove true, then under Article 63 of the Anti-Monopoly Law, NVIDIA could face fines of up to $7.8 billion (approximately 57 billion RMB) for particularly severe violations with significant consequences.

This isn’t the only antitrust investigation NVIDIA is facing, as actions have already been taken in the US, UK, and France. With China now involved, it seems NVIDIA has offended four of the five permanent members of the UN Security Council, almost rivaling the status of the late Gaddafi.

Despite these challenges, NVIDIA remains firm in its stance, declaring, “NVIDIA prevails with strength!” and expressing willingness to address any regulatory agency’s inquiries regarding their business practices.

NVIDIA’s confidence is backed by its dominant position in the high-end GPU market, where it leads in computing power without any real competition. Its global GPU market share stands at nearly 90%, a verdict delivered by users through their purchasing power.

Apart from their core GPU business, NVIDIA has achieved comprehensive domination in the AI industry through ecosystem development and strategic acquisitions, covering everything from software (CUDA) to complementary facilities (network cables).

While their market dominance is undeniable, it is not a license for NVIDIA to set exorbitant prices or engage in coercive bundling practices.

Reports have surfaced stating that NVIDIA earns a whopping 1000% profit margin on each H100 GPU sold. Meanwhile, customers struggle under the weight of high markups and supply shortages.

Even Elon Musk, with a net worth of $400 billion, has personally criticized NVIDIA, stating that acquiring NVIDIA GPUs is more challenging than obtaining certain illicit substances!

Selling these tools of productivity has carried NVIDIA to a market value exceeding $3 trillion, with financial reports boasting a staggering gross profit margin of 74.6% and revenue and profits nearly doubling quarter after quarter.

Moore’s Law may no longer shine over the chip industry, but its fading has brought NVIDIA a windfall.

In terms of popularity, NVIDIA has also earned the status of a “global enemy.”

In August this year, NVIDIA first faced an antitrust investigation initiated by the US Department of Justice, followed by scrutiny from the EU anti-monopoly regulatory agency on December 7th, and subsequently, China’s State Administration for Market Regulation also initiated an investigation.

Among the many “victims” NVIDIA is facing, China undoubtedly has the most reason to hold it accountable.

It is widely known that NVIDIA treats the Chinese market differently. Products that are available in other countries are not readily accessible to Chinese companies!

Moreover, NVIDIA’s blatant double standards are a clear violation of the law.

When NVIDIA acquired Mellanox four years ago, the conditions we set included “supplying NVIDIA GPUs to the Chinese market based on the principles of fairness, reasonableness, and non-discrimination.”

Unlike the EU’s imposition of tariffs on Chinese electric vehicles, our investigation into NVIDIA for antitrust violations is not a matter of “taking sides” or promoting local protectionism.

For instance, in a recent draft from the Ministry of Finance, there is a provision stating: “In government procurement, domestic products will receive a 20% price evaluation advantage.”

The so-called domestic products must meet the following three conditions: manufactured within China, with a specified percentage of domestically produced components, and meeting requirements for specific key components and processes.

Those familiar with the international situation may have noticed similarities between this and the US 2022 “Inflation Reduction Act,” which imposes similar restrictions on consumer tax credits for purchasing electric vehicles.

The act specifies that battery components must be mined, processed, and assembled primarily in the US, Canada, Mexico, or countries with free trade agreements with the US.

At first glance, our restrictions on “domestic products” look a bit like crossing the river by feeling America’s stones once again.

However, upon closer examination, it becomes clear that it is an entirely different matter.

The US “Inflation Reduction Act” explicitly prohibits the inclusion of battery components assembled or produced by “sensitive foreign entities.” It’s essentially a direct statement against using Chinese batteries.

Even Tesla vehicles manufactured in the US lose out on subsidies if their battery packs are sourced from China.

The “Inflation Reduction Act” applies to consumer purchases; for US government procurement, even more stringent restrictions apply.

In the 1933 Buy American Act, federal agencies were required to give preference to products that were “substantially all” mined, produced, or manufactured in the United States, with a specific domestic content requirement of 50%.

In recent years, there have been more detailed strategies specifically targeting Chinese products.

In the field of semiconductors alone:
• In 2019, the US sanctioned Huawei, prohibiting unauthorized American companies from selling products and technology to the company.
• In 2020, any entity (including non-US companies like TSMC) was banned from supplying semiconductor products containing US technology to Huawei.
• In 2022, comprehensive restrictions were announced on selling advanced chips to China.
• In 2023, through “long-arm jurisdiction,” the US restricted China’s ability to purchase high-end chips via several dozen third countries.
• In early December of this year, the US announced a new round of semiconductor export controls to China, with 140 Chinese companies listed on the Entity List.

The US sanctions have escalated and intensified, gradually moving from targeting a single company to mobilizing all available resources.

Our consultation draft, by contrast, explicitly calls for “treating all types of businesses equally”: even foreign enterprises, as long as their products are manufactured or assembled domestically and meet a certain level of localization, can be considered “Chinese-made” and enjoy the 20% price evaluation advantage!

This reveals a strategy to support the industry chain, encouraging competition between Chinese and foreign products. This starkly contrasts with the approach of European and American countries that wield the tariff stick against Chinese electric vehicles.

Indeed, NVIDIA’s differential treatment in the Chinese market may be somewhat “coerced,” given the circumstances of operating in the United States. However, to argue that NVIDIA is innocent would also mean acknowledging the unjust sanctions imposed on Chinese companies like Huawei.

In the face of the trade war instigated by the United States, we will not stand idly by or surrender.

If the US prohibits NVIDIA from selling high-end chips to China, we will also restrict the export of rare metals like gallium, germanium, and antimony to the US. The principle of reciprocity is essential in diplomatic relations.

If one believes that the actions of Europe and America represent the practices of the “civilized world,” then China, following suit in launching an antitrust investigation against NVIDIA after the US and Europe, is aligning with the “civilized world’s” footsteps.

Everything is just getting started.

As trade tensions escalate, the old world order is crumbling.

Advocates of the global market and free trade were not wrong; they simply lived in an era when China had just joined the WTO, with fewer restrictions on global capital flows.

However, times have changed. The US began imposing tariffs on Chinese goods in 2018, the UK officially exited the EU in 2020, and the EU is targeting Chinese electric vehicles in 2024.

Those who once established the rules are now the first to break them, while we have continued to abide by the ethics of the old world for a considerable time.

In this ongoing trade war, we maintain restraint, aiming to lead by example and uphold a fair and just global market order.

As the world’s largest industrial nation, an open market and free trade also benefit us.

When we proclaim that “American chips are no longer secure or reliable” and put written support for domestic products into government procurement rules for this sensitive sector, it signals that the existing market order may be in jeopardy.

Recognize reality, abandon illusions. Believing that friendly development can be achieved simply by adhering to open cooperation is wishful thinking.

When this chaotic era of rampant trade protectionism will end, nobody knows; what is certain is that we were not the first to fire the shot.

China is the Last Hope to Stop NVIDIA’s Monopoly
Wed, 11 Dec 2024 | https://thechinaacademy.org/china-is-the-last-hope-to-stop-nvidias-monopoly/
The AI industry is a vast market, yet there is nowhere to retreat.


On December 9, China launched an antitrust investigation into the US chipmaker Nvidia. The company could face fines of up to $1.03 billion, according to the South China Morning Post.

Global Times reported that the investigation primarily concerns an acquisition. In 2019, Nvidia announced its $6.9 billion acquisition of Mellanox Technologies, the largest acquisition deal of that year. The concern was that buying Mellanox would let Nvidia complete a near-monopoly over the AI industry, a market the US International Trade Administration estimates will add $15 trillion to the global economy by 2030. The deal also drew scrutiny from antitrust authorities in the European Union and the United States. Yet it now seems China is the only country positioned to stop Nvidia’s monopoly.

To understand why, we need to first grasp how Nvidia’s monopoly operates:

To train a competitive AI, three core components are essential: hardware, software, and communication technology. Nvidia has established a global monopoly over the first two.

In terms of hardware, Nvidia’s H100 is currently the best-selling AI training chip worldwide. According to Nasdaq, Nvidia sold an estimated $38 billion worth of H100 GPUs in 2023, as companies raced to acquire the chips for training large language models. This surge in demand propelled Nvidia to the forefront of the AI chip market, securing a market share of over 90%.

Mizuho Securities estimates that Nvidia controls between 70% and 95% of the AI chip market, specifically for training and deploying models like OpenAI’s GPT. The H100, when purchased directly from Nvidia, is priced at approximately $25,000, and Nvidia’s pricing power is reflected in a remarkable 78% gross margin. This vividly demonstrates how heavily Nvidia can squeeze technology companies from its monopoly position.
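As a rough back-of-envelope illustration (note that 78% is Nvidia’s company-wide gross margin, not an official per-product figure), a margin like that implies only about a fifth of the sticker price covers the cost of goods:

$$ \text{implied cost} \approx (1 - 0.78) \times \$25{,}000 = \$5{,}500 \qquad \text{gross profit} \approx \$19{,}500 \text{ per H100} $$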

In terms of software, Nvidia’s most formidable competitive advantage is the Compute Unified Device Architecture (CUDA).

The competition for AI models initially stemmed from the rivalry between Google and Meta. Engineers discovered that while CPUs excel at general computing and meet the requirements for inference tasks in AI, they were insufficient for handling the large-scale parallel computing tasks required for deep learning, especially for training large models. GPUs, with their powerful parallel processing capabilities, were better suited for this purpose. However, their programming models and memory access patterns differed significantly from those of CPUs, creating considerable development challenges.

To solve this, Nvidia introduced CUDA in 2007, enabling developers to use C/C++ to tap into the parallel processing power of GPUs for non-graphical workloads. This innovation laid the foundation for deep learning, prompting major frameworks like TensorFlow and PyTorch to integrate native support for CUDA early on.
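To make the idea concrete, here is a minimal sketch of the programming model CUDA introduced: a function (a “kernel”) is written once and executed by thousands of GPU threads in parallel, each handling one element of the data. This is illustrative boilerplate, not code from any particular framework:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of c = a + b.
// The per-element loop a CPU would run is replaced by massive parallelism.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // ~1 million elements
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    // Unified memory keeps the sketch short; production code often
    // manages separate host/device buffers with cudaMemcpy instead.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;               // threads per block
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();               // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);         // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Deep learning frameworks hide this layer behind Nvidia’s GPU-accelerated libraries, which is precisely why the ecosystem is so sticky: the low-level kernels everyone depends on are written against Nvidia’s API.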

CUDA quickly became the most efficient path to harness the computational power of GPUs. According to the Netflix Technology Blog, using a custom CUDA kernel, the training time for a neural network on a cg1 instance was reduced from over 20 hours to just 47 minutes, roughly a 25-fold speedup, when processing 4 million samples.

In the academic world, most papers demonstrating innovations in neural networks defaulted to using CUDA acceleration when conducting GPU-based experiments, further cementing its dominance in the emerging deep learning community.

Meanwhile, Qualcomm, Intel, and Google have reportedly teamed up to offer oneAPI as an alternative to Nvidia’s CUDA, but these efforts have largely faltered. The reason for this is simple: once developers invest in the CUDA ecosystem, switching to other GPU frameworks becomes a daunting challenge. It requires rewriting code, learning new tools, and often re-optimizing the entire computing process. These high switching costs make it more practical for many companies and developers to continue relying on Nvidia’s products, rather than risk exploring alternative solutions.

Even tech giants like Google, with the resources to invest heavily in custom ASICs, have struggled to replace CUDA: as late as 2018, despite the company’s substantial investments in its TPUv2 hardware, the overwhelming majority of Google’s accelerator workloads reportedly still ran on Nvidia GPUs.

In terms of communication technology, Nvidia’s acquisition of Mellanox has raised concerns in China, the US, and the EU.

As AI models continue to grow in size, large language models now require hundreds of gigabytes, if not terabytes, of memory just for their model weights. For example, production recommendation systems deployed by Meta require dozens of terabytes of memory for their massive embedding tables. A significant portion of the time spent on training or inference for these large models isn’t dedicated to matrix multiplications, but rather to waiting for data to reach the compute resources.
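A quick illustrative calculation (the model size and precision here are chosen purely as an example, not taken from the article) shows why: the weights alone, before any activations or optimizer state, already exceed the memory of any single accelerator.

$$ 175 \times 10^{9}\ \text{parameters} \times 2\ \tfrac{\text{bytes}}{\text{parameter (FP16)}} = 350\ \text{GB of weights} $$

An H100 carries 80 GB of on-board memory, so a model of that scale must be sharded across several GPUs, and the interconnect between them becomes the bottleneck.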

To address this challenge, InfiniBand—a computer networking standard used in high-performance computing that boasts extremely high throughput and low latency—has been introduced into the AI training industry. According to The Institute of Electrical and Electronics Engineers (IEEE), InfiniBand now dominates AI networking, accounting for roughly 90% of deployments.

Mellanox has been the leading supplier of InfiniBand technology. As of 2019, Mellanox connected 59% of the TOP500 supercomputers, with a year-over-year growth of 12%, showcasing its dominance and continued advancement in InfiniBand technology.

Jensen Huang met with Mellanox CEO Eyal Waldman

By acquiring Mellanox, Nvidia secures the “holy trinity” of the AI industry—domination in GPU chips, development tools, and communication technologies for distributed computing. This acquisition further strengthens Nvidia’s monopoly in AI, creating a snowball effect that makes it increasingly difficult for competitors to break through.

When China’s State Administration for Market Regulation approved Nvidia’s acquisition of Mellanox in April 2020, it imposed additional restrictive conditions. These included prohibiting Nvidia from bundling GPUs and networking devices, and from discriminating against customers who purchase these products separately in terms of price, functionality, and after-sales service. However, in June 2022, Nvidia stated explicitly in the user agreement for CUDA 11.6 that it bans running CUDA-based software on third-party GPUs, for example through translation layers. This effectively forces developers using AMD and Intel chips to switch to Nvidia’s GPUs, prompting China to launch an investigation into Nvidia last week for potential violations of antitrust law.

By now, many of you may understand why China is investigating Nvidia. However, the question remains: why is China only starting this review now, two years after Nvidia allegedly broke the law? This delay can be attributed to three key factors.

Firstly, China’s chipmakers have finally developed the technology to challenge Nvidia.

The Nvidia H100 GPU is manufactured using TSMC’s N4 process, which the IEEE International Roadmap for Devices and Systems categorizes as a “5 nm” process. Meanwhile, ASML is only allowed to sell DUV machines to China, which are primarily used for producing 7 nm chips.

According to Bloomberg, SiCarrier, a Chinese chipmaking equipment developer collaborating with Huawei, secured a patent in late 2023 involving Self-Aligned Quadruple Patterning (SAQP). This breakthrough allows for certain technical achievements akin to those seen in 5 nm chip production. Business Korea argued in May that chips made using such techniques would cost four times as much as those produced with EUV lithography, but Huawei’s pricing appears to bust this claim: on November 26, Huawei launched its Mate 70 Pro, with the 1TB version priced at 7,999 CNY, the same price as the Mate 60 Pro with similar specifications released the previous year.

On December 9, Huawei’s executive director, Yu Chengdong, publicly announced that the chips in the Mate 70 series are 100% made in China. Technode reported that the Huawei Mate 70 Pro’s CPU, the Kirin 9020, outperforms Qualcomm’s Snapdragon 8+ Gen 1, which was released in 2022 and manufactured using TSMC’s N4 process. According to insiders, while the Kirin 9020 may still use a 7 nm transistor process, its advanced packaging technology has greatly enhanced computing efficiency.

The successful launch of the Huawei Mate 70 Pro demonstrates that Chinese chipmakers can now produce chips competitive with TSMC’s 5nm technology in large quantities and at competitive prices. This achievement also positions them to extend their expertise to GPUs, provided they adapt their designs to meet the specific requirements of each processor type. This advancement suggests that Chinese chipmakers are nearing the capability to produce GPUs with hardware performance comparable to Nvidia’s H100.

Moreover, the Financial Times reports that China’s biggest chipmaker SMIC has put together new semiconductor production lines in Shanghai, aiming to produce 5 nm chips. Although 5 nm chips remain a generation behind the current cutting-edge 3 nm ones, the move shows that China’s semiconductor industry is still making gradual progress despite US export controls.

Secondly, Chinese AI companies are increasingly positioned to reduce their dependence on CUDA.

American companies like Google and Meta remain heavily reliant on CUDA because it offers the best acceleration performance for Nvidia’s H100 chip, which dominates the AI hardware market. However, US export controls introduced under the Biden administration in October 2022 prohibited Nvidia from selling H100 chips to Chinese companies. This restriction has forced Chinese technology giants such as Baidu and Tencent to explore alternatives, including AMD GPUs and domestically developed GPU chips, effectively reducing their reliance on Nvidia’s CUDA ecosystem.

In addition, Moore Threads, a Chinese GPU design company, launched its Moore Threads Unified System Architecture (MUSA) on November 5. The MUSA architecture, a serious challenger to CUDA, provides a high-performance, flexible, and highly compatible computing platform that supports various parallel computing tasks, including AI computation, graphics rendering, multimedia applications, and physical simulation. The company also provides a wealth of development tools and libraries, such as the MUSA SDK, AI acceleration libraries, and communication libraries, to help developers build and optimize applications. Moreover, MUSA is compatible with CUDA’s software stack interface, significantly easing the process of porting applications and lowering the cost for enterprises to move away from Nvidia products.

Moore Threads’ MTT S4000 AI GPU, available since December 2023.

Thirdly, InfiniBand technology is becoming outdated compared to Chinese Ethernet advancements.

While InfiniBand currently dominates AI networking with approximately 90% of deployments, IEEE reports that Ethernet is emerging as a strong contender for AI clusters. For instance, InfiniBand often lags behind Ethernet in terms of maximum speeds. Nvidia’s latest Quantum InfiniBand switch reaches 51.2 Tb/s with 400 Gb/s ports, whereas Ethernet achieved 51.2 Tb/s nearly two years ago and now supports port speeds of up to 800 Gb/s.
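To unpack those numbers with a simple worked example (the port counts below follow from the quoted figures, not from any vendor datasheet): a switch’s aggregate capacity divided by its per-port speed gives the number of devices one box can connect directly, so the same 51.2 Tb/s of switching silicon can be exposed as many slower ports or fewer, faster ones.

$$ \frac{51.2\ \text{Tb/s}}{400\ \text{Gb/s}} = 128\ \text{ports} \qquad \text{vs.} \qquad \frac{51.2\ \text{Tb/s}}{800\ \text{Gb/s}} = 64\ \text{ports} $$

Faster ports mean each accelerator gets double the bandwidth, which is why Ethernet’s earlier arrival at 800 Gb/s matters for AI clusters.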

One challenge for Ethernet adoption has been its inability to handle the massive workloads of AI training and other high-performance computing (HPC) applications. The high traffic levels in data centers can lead to bottlenecks, causing latency issues that make it unsuitable for these tasks.

However, on September 27, during the 2024 China Computational Power Conference, state-owned China Mobile and 50 other partners introduced Global Scheduling Ethernet (GSE)—a new networking protocol designed to handle large data volumes and provide high-speed transfers tailored to AI and other HPC workloads.

Since Nvidia’s acquisition of Mellanox in 2019, there are no longer independent suppliers of InfiniBand products. In contrast, Ethernet has a diverse range of suppliers worldwide. If China Mobile successfully promotes Ethernet technology as a replacement for InfiniBand in AI applications, it could provide AI companies globally with access to local suppliers, potentially reducing costs and fostering competition.

China is not the only country challenging Nvidia’s monopoly. However, it is the most determined and resourceful one. Due to U.S. sanctions, China’s own technological advancements and a vast domestic market, China has emerged as one of the few countries with significant independence and competitiveness in hardware, software, and communication technology. More importantly, these technology companies are not just owned by the state—they are driven by the ingenuity and efforts of the Chinese people.

As of October 2024, China boasts 1.1 billion internet users, accounting for 20% of the world’s total online population. When President Biden banned Chinese companies from purchasing the most advanced American chips and algorithms, it was the demand from these 1.1 billion users—who engage in activities such as watching short videos, gaming, and shopping online—that empowered Chinese tech companies to pursue self-reliance. According to official Chinese statistics, in 2023, the market size of China’s digital economy reached 53.9 trillion yuan.

Europe and the Global South undoubtedly have minds as brilliant as China’s. However, with Google and Meta leaving little room for competitors to emerge in these regions, they can never rely on the support of European and Global South users to get rid of Nvidia’s dominance. The free market is great, but if you only support it when it suits your needs, it might not work as it should.

The Chip War Is Nearing Its End as China Chips In
Tue, 10 Dec 2024 | https://thechinaacademy.org/the-chip-war-is-nearing-its-end-as-china-chips-in/
A Chinese telecommunications observer paints a stark picture for the U.S.

According to data from China’s customs, between January and October 2024, the country exported chips worth 931.1 billion RMB, which is about $128 billion. Obviously, the figure is on track to exceed 1 trillion RMB, which is about $138 billion. For many, this figure is staggering, but its implications are even more profound.
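A simple linear extrapolation (assuming the monthly pace of the first ten months holds) makes that milestone look all but certain:

$$ \frac{931.1\ \text{billion RMB}}{10\ \text{months}} \times 12\ \text{months} \approx 1{,}117\ \text{billion RMB} $$

comfortably above the 1 trillion RMB (about $138 billion) projected here.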

The chips produced are first used domestically to meet internal demand, and only then are they available for export. China has long been the world’s largest semiconductor market, importing about $300 billion worth of chips annually. The fact that China is now poised to export over $138 billion worth of semiconductors suggests that its total chip production is at least double that export figure, since domestic demand is served first.

According to the Semiconductor Industry Association (SIA), global chip sales for 2023 stood at $526.8 billion. To put this in perspective, even if the global market grows to $600 billion in 2024, nearly half of the world’s chips would be produced in China. And at this pace, 70% of global chip production could come from China by the end of 2026.

Despite various challenges China’s chip industry faced between 2021 and 2023, the sector has entered a phase of rapid expansion, with annual growth exceeding 20%.

This momentum is unlikely to slow down. In fact, most of China’s chip factories began construction in 2021, with many completed by 2022 to early 2023, when they started trial production with small batches. In 2024, these facilities are gradually ramping up to full-scale production. Additionally, many newly built chip factories are still coming online in 2024. That means even more advanced facilities are expected to begin production in 2025.

A worker at a semiconductor fab in Binzhou, Shandong province, China.

Moreover, China’s chip market saw a 10% decline in chip imports in 2023, and it is expected that this figure could further decrease by 10–15% in 2024. Over time, the market is expected to reach a balance, with China’s chip exports growing at an annual rate of 20%, while chip imports decline by 20% annually.

Today, China’s chip manufacturing capabilities are unmatched globally. Most of its chip factories were constructed in the past three years, and they boast far superior design, management, and process standards compared to factories built years or even decades earlier. These new facilities also leverage higher levels of intelligent management, leading to greater efficiency, higher production output, and lower production costs.

China has already established a complete chip supply chain, including design software, production equipment, chip design, wafer manufacturing, chip packaging, testing, materials, and industrial gases.

Additionally, the competition among dozens of chip factories within China is driving prices lower and improving quality. This intense internal competition is a key factor behind the rapid growth of Chinese chip products in global markets.

By 2026, competition in chip manufacturing is expected to extend into advanced manufacturing technologies. By 2028, China is highly likely to account for roughly half of the world’s output in advanced chip production.

Despite no official announcement from the Chinese government or state media, the reality is that China has become the world’s largest chip manufacturer, holding half of the global chip production capacity.

Today, U.S. think tanks and media are beginning to reflect on the failure of America’s tech war against China. In the next three years, it is likely that the U.S. will start accusing China of chip overcapacity, alleging severe disruptions to the global chip market. At the same time, the U.S. government is likely to blame China for halting chip imports from America, while demanding that the Chinese government allow its firms to resume purchasing U.S. chips.

HR 4346, the Chips and Science Act of 2022, is displayed after it was signed by US President Joe Biden. The act aims to both strengthen American supply chain resilience and counter China.

By around 2028, the so-called “chip war” may come to an end, with China emerging as the dominant force in the global chip industry. In contrast, the prospects for the U.S., Japan, and South Korea seem far less promising.

Chinese Scientists Just Built A Computer Chip Out of Water
Thu, 14 Nov 2024 | https://thechinaacademy.org/chinese-scientists-just-built-a-computer-chip-out-of-water/
Its potential applications range from more energy-efficient AI to advanced robotics.

For decades, the story of computing has been inextricably linked to silicon. Refined from humble sand, this semiconductor has powered the digital revolution, its ability to control electron flow forming the bedrock of our modern world. But now, a radical new chapter is being written, one that challenges our very understanding of what constitutes a computer chip. Chinese scientists from Zhejiang University have achieved the seemingly impossible: they’ve built a functional computer chip using water.

The novelty of this approach is breathtaking.  While silicon’s role in electronics is well-established, the idea of harnessing the properties of water for computation is revolutionary. This isn’t about water cooling; it’s about water being the computational medium.  

This innovation, detailed in a recent publication in the journal Device, leverages the unique properties of water molecules. Using light pulses, the chip exploits the rapid polarization changes of water molecules to generate a signal. This signal propagates through the water, decaying predictably in a natural process that eliminates the need for complex and energy-intensive circuitry found in traditional silicon chips.

To better understand it, think of water molecules as tiny magnets with a positive and negative end. These magnets can be flipped by light, and this flipping creates an electrical signal. The researchers discovered that this signal decays exponentially – like the fading sound of a bell – as it travels through water. This predictable decay is the key to the chip’s operation.

Imagine a line of dominoes (or magnetic dominoes, in this case). If you push the first one, it knocks down the next, and so on. The chip works similarly, but instead of dominoes, it’s water molecules flipping each other’s polarity. The light acts as the initial push, and the signal weakens as it travels down the line of molecules, just like the force of the dominoes diminishes as they fall. This natural signal decay eliminates the need for complex circuitry found in traditional computer chips, making it incredibly energy-efficient.
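In symbols (an illustrative form, not necessarily the paper’s exact formulation), the propagating signal follows a simple exponential law, with a characteristic decay length λ set by how strongly neighboring molecules couple:

$$ S(x) = S_0 \, e^{-x/\lambda} $$

where S₀ is the signal strength at the light pulse and x is the distance traveled through the water. Because the decay profile is fixed by physics, no circuitry is needed to shape it.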

To simulate how the water molecules behave, the team used a model called the Ising model, often used to understand magnetism. This model helped them predict and understand the exponential decay of the signal. They also built a prototype chip that successfully identified the letters “ZJU,” the initials of Zhejiang University, demonstrating its functionality.

This unexpected breakthrough draws inspiration from the brain itself. The liquid environment surrounding neurons, where electrochemical signals similarly propagate and decay, served as a biological model for this low-energy, high-performance computing paradigm. Although the brain demonstrates impressive computing power, its biggest advantage over semiconductor chips is energy efficiency. While the brain, with its roughly 100 billion neurons and trillions of synapses, operates on a mere 20 watts of power, simulating its functionality digitally requires megawatts. This difference highlights a vast disparity in energy consumption per computational operation.

In fact, current AI systems, like those powering chatbots, consume vast amounts of energy, enough to power a small city in a single day. By contrast, this new water-based chip consumes a mere 10⁻¹⁸ joules per operation. That’s incredibly tiny: think of the energy it takes to lift a grain of sand a fraction of an inch. This is orders of magnitude less energy than current state-of-the-art computer chips.
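To put that figure in perspective with one line of arithmetic:

$$ 10^{-18}\ \tfrac{\text{J}}{\text{op}} \times 10^{18}\ \tfrac{\text{ops}}{\text{s}} = 1\ \text{W} $$

In other words, at this energy cost, a billion billion operations per second would dissipate only about one watt of power.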

While still in its early stages, this water-based chip represents a significant leap forward in low-power computing. Its potential applications are vast, ranging from more energy-efficient AI to advanced robotics and other technologies that currently demand significant power. The use of water, a readily available and sustainable resource, adds another layer of appeal to this promising technology. The researchers are confident that further development will lead to even more powerful and efficient water-based computing systems.

To curb China, the US is killing the tech dreams of the Middle East
Sun, 29 Sep 2024 | https://thechinaacademy.org/to-curb-china-the-us-is-killing-the-tech-dreams-of-the-middle-east/
Can the development of human technology break through the obstacles set by the US?

Thanks to US, Huawei Is Ready to Take on NVIDIA?
Wed, 21 Aug 2024 | https://thechinaacademy.org/thanks-to-us-huawei-is-ready-to-take-on-nvidia/
Why do tech giants like Nvidia struggle under US sanctions against China? Because complying with these restrictions means forgoing the vast Chinese market.

Bypassing the GPU Blockade: China’s Taichi II Microchip is Coming
Sun, 11 Aug 2024 | https://thechinaacademy.org/bypassing-the-gpu-blockade-chinas-taichi-ii-microchip-is-coming/
From Taichi to Taichi II in just 4 months: China’s breakneck pace in optical AI chip development.

The US government has been tightening the export control of high-performance GPU as well as placing sanctions on Chinese tech firms, in an effort to slow down China’s progress in AI. However, these flawed and porous measures are doomed to fail.

Scientists from China’s Tsinghua University have developed a revolutionary new method for training optical neural networks, potentially paving the way for faster and more energy-efficient artificial intelligence systems that don’t rely on GPUs at all. The research, recently published in Nature, introduces a technique called “fully forward mode” (FFM) learning, which allows for on-site training of optical systems without the need for complex computer simulations.

Optical computing, which uses light instead of electricity to process information, has long been touted as a promising alternative to traditional electronic computers. Much like how fiber optic cables can transmit data faster than copper wires, optical computers have the potential to perform calculations at much higher speeds while consuming less energy.

However, training these optical systems has been a significant challenge. Previously, researchers had to rely on computer simulations to design and optimize optical neural networks, similar to how architects might use computer models to design a building before construction. This approach was often inaccurate and inefficient, as the simulations couldn’t perfectly account for real-world imperfections in the optical systems.

In addition, in contrast to model inference (using AI models to perform tasks), model training demands substantial computational power. Yet current optical neural network training leans heavily on GPUs for offline modeling and requires precise alignment of physical systems. As a result, the scale of optical training faces significant limitations, as though the advantages of optical high-performance computing were fenced in by invisible constraints.

The new FFM learning method addresses this problem by allowing the optical system to learn and adjust itself in real-time, much like how a self-driving car might continuously adapt to road conditions. This on-site learning approach eliminates the need for complex simulations and allows the system to account for its own unique quirks and imperfections.
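As a rough sketch of the idea (a generic forward-only update rule, not necessarily the exact scheme in the Nature paper): if gradients cannot be obtained by backpropagating through an accurate digital model, the physical system can still be trained using only forward evaluations, for example by measuring how the loss L responds to a small perturbation of the parameters θ along a random direction u:

$$ \theta \leftarrow \theta - \eta \, \frac{L(\theta + \varepsilon u) - L(\theta)}{\varepsilon} \, u $$

Every quantity here is measured on the real optical hardware, so fabrication imperfections are automatically baked into the learned solution instead of being mis-modeled in simulation.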

The Tsinghua researchers demonstrated the power of their new technique through several impressive experiments. First, they created a deep optical neural network capable of recognizing images with accuracy comparable to traditional computer-based systems. This is akin to teaching a camera to recognize objects as well as a human can, but using only optical components.

Subsequently, the team developed an imaging system that could see through scattering materials (like fog or frosted glass) with unprecedented clarity. This technology could potentially be used to improve visibility for self-driving cars in poor weather conditions or enhance medical imaging techniques.

Moreover, they also demonstrated an “all-optical” system capable of processing information using extremely low light levels – equivalent to less than one photon per pixel.

And finally, the researchers were able to design an optical microchip based on this FFM learning method, known as Taichi II, with the hope of replacing the GPUs currently in use and significantly improving the training efficiency of AI systems.

The implications of this research are far-reaching. Optical AI systems could potentially process information much faster and more efficiently than current electronic computers, leading to advancements in fields such as autonomous vehicles, medical diagnosis, and climate modeling.

Chinese Scientists Created World’s Fastest Vision Chip for Autonomous Cars
Sun, 02 Jun 2024 | https://thechinaacademy.org/chinese-scientists-created-worlds-fastest-vision-chip-for-autonomous-cars/
Safer self-driving cars: Tianmouc chip tackles tricky road conditions with ‘human eyes’.

Our eyes are windows to an astonishing world, and the human visual system is the intricate machinery that makes it all possible.  With incredible precision and detail, it captures and interprets the world around us, seamlessly processing visual information through a complex network of neural pathways. This allows us to effortlessly recognize objects, navigate our surroundings, and experience the breathtaking beauty of the visual world in all its glory.

The current computer vision technology would pale in comparison. Imagine a self-driving car cruising down a sun-drenched highway, only to be thrown into a panic by a shimmering mirage on the asphalt. Or a robot, tasked with fetching a coffee cup, getting confused by a shadow cast across the table. These are the limitations of current computer vision and sensors – they’re still struggling to see the world with the same adaptability and understanding as a human. While they can process information with lightning speed and pinpoint accuracy, they’re easily fooled by simple tricks of light and shadow, struggling to grasp the nuances of a cluttered, dynamic world. Like a child trying to decipher a complex puzzle, they’re still learning to see the world with the same depth and intuition as their human counterparts.

The secret of human visual system lies in its two remarkable pathways that work in harmony to process visual information. These pathways, known as the ventral stream and the dorsal stream, play distinct roles in our perception and interaction with the world.

The ventral stream, often referred to as the “what” pathway, originates in the primary visual cortex (V1) and extends to the inferior temporal cortex. Its primary task is object recognition – the ability to identify and categorize what we see. This pathway meticulously dissects visual input into its fundamental components, such as color, shape, texture, and form. It then weaves these components together to form a coherent representation of objects and scenes, allowing us to make sense of the visual world around us.

On the other hand, we have the dorsal stream, also known as the “where/how” pathway. Like the ventral stream, it originates in V1, but it projects to the posterior parietal cortex. The dorsal stream is responsible for processing vital information about the location and motion of objects in our visual field. It serves as our guide for spatial awareness and actions. By utilizing visual primitives, the dorsal stream constructs representations of the relationships between objects and ourselves. This enables us to swiftly process visual information and, in turn, execute actions like reaching and grasping with remarkable precision.

In a nutshell, the ventral stream takes a “what” approach, deconstructing visual input into its essential features to facilitate object recognition. Conversely, the dorsal stream adopts a “where/how” strategy, utilizing spatial primitives to guide our actions based on the visual information received. Together, these two pathways work in tandem, allowing our visual system to not only recognize objects accurately but also respond swiftly to visual stimuli in our environment.

Now, the burning question remains: could artificial systems ever emulate nature’s crowning achievement and model computer vision after the human visual system’s remarkable prowess?

The scientists at China’s Tsinghua University are undoubtedly convinced of this tantalizing possibility. Featured on the cover of Nature, the prestigious scientific journal, their latest achievement marks the first ever vision chip with complementary pathways designed for open-world sensing. In other words, they have succeeded in replicating the human visual system on a microchip that lets robots see the world as we do.

The chip, Tianmouc, whose name literally means “sky’s eye,” employs an innovative hybrid pixel array, elegantly divided into “cone-type pixels” that emulate the cone cells in human eyes to capture color, and “rod-type pixels” that mimic the rod cells for rapid spatiotemporal perception. The entire pixel array is ingeniously back-illuminated, with light entering from the rear of the silicon, adeptly enhancing photon collection efficiency. While the photosensitive components of the “cone-type” and “rod-type” pixels are similar, comprising photodiodes and transfer gates, the “rod-type” pixels are uniquely integrated with multiple storage units to preserve temporal information at the pixel level, deftly priming them for spatiotemporal computing.

Within the Tianmouc chip, two distinct pathways coexist – the cognitive pathway and the action pathway – their readout circuits subtly diverging. The cognitive pathway harnesses high-precision analog-to-digital converters to transform the signals harvested by the cone-type pixels into dense data matrices. In contrast, the action pathway adopts a multi-tier architecture, first reducing the spatiotemporal difference signals emanating from the rod-type pixels, then employing an adaptive quantization circuit to encode them into digital pulse sequences of specified bit-width, thereby adroitly curtailing the data payload.

Capitalizing on the inherent sparsity of the spatiotemporal difference data generated by the action pathway, the chip’s design incorporates an ingenious address-event representation mechanism: meticulously categorizing and packaging the data according to its temporal occurrence, pixel position, positive/negative polarity, and other attributes, to form compact data frames, further optimizing transmission bandwidth.
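For readers who want a concrete picture, an address-event representation typically boils down to a compact record per event. The field names and widths below are hypothetical, chosen only to illustrate the idea, and are not taken from the Tianmouc paper:

```cuda
#include <cstdint>

// Hypothetical AER event record: only pixels whose spatiotemporal
// difference crossed a threshold emit an event, so sparse scenes
// generate very little traffic on the output link.
struct AerEvent {
    uint32_t timestamp;  // when the change occurred (e.g., microsecond ticks)
    uint16_t x, y;       // pixel position in the rod-type array
    int8_t   polarity;   // +1 = brightness increase, -1 = decrease
    uint8_t  magnitude;  // adaptively quantized difference amplitude
};

// Events are grouped into compact frames by time window and position
// before transmission; this packing is what keeps bandwidth low.
```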

As a result, Tianmouc is able to quickly acquire visual information, at a rate of 10,000 frames per second with 10-bit precision and a high dynamic range of 130 dB. By comparison, Grove Vision AI V2, currently the most powerful computer vision chip, operates at a rate of 30.3 frames per second, more than 300 times slower.

Tianmouc also reduces the amount of bandwidth needed by 90% and uses very little power. This allows it to overcome the limitations of traditional visual sensing methods and perform well in a variety of extreme scenarios, ensuring the stability and safety of the system.

When used in autonomous driving system, the chip is able to sense its surroundings very well in real-world situations. It has been tested in a variety of lighting conditions and complex scenarios, such as glare, tunnels, and the appearance of abnormal objects, and has performed well in all of them. The chip uses special algorithms to ensure that it can accurately sense the environment even when the lighting changes, and it can also suppress over- and under-exposure to provide stable imaging. Moreover, Tianmouc’s high-speed sensing allows it to quickly detect and track moving objects, and it can also recognize non-normal objects such as warning signs.

“The successful development of the Tianmouc represents a significant breakthrough in the field of visual sensing technology,” said Zhao Rong, a professor in the Department of Precision Instruments at Tsinghua University and a co-corresponding author of the paper. “This chip has the potential to revolutionize a wide range of applications, including autonomous driving, embodied intelligence, and more. We are excited to see the impact that it will have in the future.”

US Threatens to Shut Down TSMC’s EUV if China Reunifies with Taiwan by Force
Thu, 23 May 2024 | https://thechinaacademy.org/us-threatens-to-shut-down-tsmcs-euv-if-china-reunifies-with-taiwan-by-force/
This Is America’s Last Resort for “Defeating” China?

Those in Washington are truly worried about the situation in Taiwan these days. However, among the few things they can actually do, apart from promoting arms sales that enrich themselves and the military-industrial complex, is issuing warnings or threats. But sometimes it’s hard to know who is actually the one being threatened.

According to the latest report from Bloomberg, in the event of China’s reunification with Taiwan by force, the US can remotely disable all Chinese weapons! What a genius strategy!

The only problem is that’s not possible. So instead, with the urge to do something (no matter what) in the event of Chinese military action, they choose to disable the EUV machines used for manufacturing the advanced microchips the world depends on.

Still a brilliant strategy, isn’t it? Imagine how much trouble it would cause the Chinese semiconductor industry, which is already heavily sanctioned. It’s only natural to top it all off with a push of the blue button (or a red button, if such a thing even exists), and then all the EUVs in China suddenly become useless.

On the receiving end, China certainly should take this warning seriously and start to inventory all the EUVs currently in its possession to estimate the potential loss. Good news: no EUV machine has ever been shipped to the Chinese mainland. Bad news: the only place in the country where you can find these most coveted machines nowadays is Taiwan.

Therefore, in the hypothetical scenario, the US would pressure ASML, the manufacturer, to shut down the EUVs remotely from the Netherlands. Although in such circumstances, the US is more likely to order the machines destroyed. Why bother pressuring others to shut them down when you can just bomb them, especially considering that the US already has special forces and secret agents on the ground in Taiwan? After all, bombing is what a superpower does.

That is to say, in such a bizarre scenario, both China and the US would be bombing this small island. See? There is common ground between the two.

Another reason I would recommend destroying the machines, rather than just disabling them, is that the crucial parts of an EUV machine can be reused even after it is shut down.

As we know, the most crucial part of an EUV machine for microchip production is the optical system, specifically the projection optics and illumination system. These components play a vital role in directing and focusing the extreme ultraviolet (EUV) light onto the silicon wafer, allowing for the creation of intricate patterns that define the functionality of each microchip. The optical system, including mirrors that reflect and direct light with nanometer precision, is essential for achieving the high level of detail and precision required in lithography processes. Keeping the optical system intact will render the whole remotely disabling technique pointless.

Trying to deter or dissuade China from taking back Taiwan by threatening with EUV machines sounds ridiculous. For the US, it’s more of a damage control to deny China the extra benefits of reunification than a real strategy.

But what if China doesn’t need these EUVs by then? In fact, Chinese companies are taking a very different approach from their Western counterparts in building lithography machines. According to reports, Huawei and China’s Semiconductor Manufacturing International Co. (SMIC) have filed patents for a chip production method called self-aligned quadruple patterning (SAQP).

The SAQP process enables the creation of microchip patterns with dimensions smaller than what traditional photolithography techniques can achieve alone. It accomplishes this through a multilayer patterning approach that mimics the meticulous construction of an extremely detailed scale model. Specifically, it first deposits an initial layout of wider features, leaving spacing in between to guide subsequent steps. Rail-like structures are then formed on the sides of these features to designate where narrower patterns will be placed. The initial features are removed, leaving only the rail structures behind, upon which a thin layer of material is deposited to start forming miniature features between the rails. Excess material is cleaned away to reduce these dimensions even further. The process repeats, adding more rail structures and then selectively etching material, to carve out increasingly finer lines and spaces far narrower than anything standard photolithography could print in a single photochemical exposure. Using SAQP, Huawei and SMIC could produce 5nm-class microchips, which would meet most of the demand.
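The geometry behind multiple patterning can be summarized in one line: each spacer pass halves the pitch of the printed pattern, so quadruple patterning divides it by four. The 76 nm starting point below is a commonly cited ballpark for a single DUV immersion exposure, used here purely for illustration:

$$ P_{\text{SAQP}} = \frac{P_{\text{single exposure}}}{4} \approx \frac{76\ \text{nm}}{4} = 19\ \text{nm} $$

Feature pitches in that range are what make 5nm-class logic plausible without EUV, at the cost of many extra deposition and etch steps.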

With that being said, the more important message here for US policymakers is to never look at the Taiwan issue through the lens of semiconductor production: compared with China’s reunification, chip supply is an issue of an entirely different order of magnitude.


Chinese communications expert Xiang Ligang believes remotely disabling lithography machines is technically feasible, but there are countermeasures against remote sabotage that could allow operations to restart. However, the shockwaves would still be catastrophic for companies and device makers dependent on the chips.

“TSMC is a civilian company, but the advanced chips it produces supply major U.S. tech giants like Qualcomm, Apple, Nvidia, AMD – a huge chunk of the American semiconductor industry. If TSMC’s lithography machines were neutralized, there are two monumental issues. One, under what laws could ASML cripple a customer’s machines? That would decimate their commercial credibility. Who would ever import their equipment again? Their temporary edge would quickly erode as rivals caught up. It’s a self-destructive path.”

“If TSMC’s operations were paralyzed, wouldn’t that obliterate scores of U.S. companies? Wouldn’t the entire American chip industry simply collapse?”

A Chinese Chip Sparks a Neuromorphic Computing Race
Tue, 21 May 2024 | https://thechinaacademy.org/a-chinese-chip-sparks-a-neuromorphic-computing-race/
Darwin3: The AI Chip That Learns Like a Brain, Works Like a Lightbulb

A typical computer chip, such as one found in a personal desktop for non-professional use, consumes around 100 watts of power. AI, on the other hand, requires significantly more energy: by one estimate, ChatGPT consumes roughly 300 joules of energy, the equivalent of a 300-watt draw sustained for one second, to answer a single question. In contrast, the human brain is much more energy-efficient, running on only around 10 watts of power, comparable to a lightbulb. This exceptional energy efficiency is one of the reasons scientists want to model the next generation of microchips on the human brain.

In the bustling tech landscape of Hangzhou, China, a team of researchers at Zhejiang University has made a significant leap in the world of neuromorphic computing with the development of their latest innovation, the Darwin3 chip. This groundbreaking piece of technology promises to transform how we simulate brain activity, paving the way for advancements in artificial intelligence, robotics, and beyond.

Neuromorphic chips are designed to emulate the architecture and functioning of the human brain. Unlike traditional computers that process information in a linear, step-by-step manner, these chips operate more like our brains, processing multiple streams of information simultaneously and adapting to new data in real-time.

The Darwin3 chip is a marvel of modern engineering, specifically designed to work with Spiking Neural Networks (SNNs). SNNs are a type of artificial neural network that mimics the way neurons and synapses in the human brain communicate. While conventional neural networks use continuous signals to process information, SNNs use discrete spikes, much like the bursts of electrical impulses that our neurons emit.

Test environment. (a) The test chip and system board. (b) Application development process.

One of the standout features of Darwin3 is its flexibility in simulating various types of neurons. Just as an orchestra can produce a wide range of sounds by utilizing different instruments, Darwin3 can emulate different neuron models to suit a variety of tasks, from basic pattern recognition to complex decision-making processes.

To achieve this goal, one of Darwin3’s key innovations is its domain-specific instruction set architecture (ISA). This custom-designed set of instructions allows the chip to efficiently describe diverse neuron models and learning rules, including the leaky integrate-and-fire (LIF) model, the Izhikevich model, and Spike-Timing-Dependent Plasticity (STDP). This versatility enables Darwin3 to tackle a wide range of computational tasks, making it a highly adaptable tool for AI development.
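For a sense of what such an ISA must express, here is the textbook form of the LIF model, the simplest of the neuron models named above (a standard formulation, not lifted from the Darwin3 paper):

$$ \tau_m \frac{dV}{dt} = -(V - V_{\text{rest}}) + R\, I(t), \qquad V \ge V_{\text{th}} \;\Rightarrow\; \text{emit spike},\; V \leftarrow V_{\text{reset}} $$

The membrane potential V leaks back toward its resting value, integrates the input current I(t), and fires a discrete spike whenever it crosses the threshold. A neuromorphic ISA encodes updates like this, plus learning rules such as STDP, as native instructions rather than as software loops on a general-purpose core.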

Another significant breakthrough is Darwin3’s efficient memory usage. Neuromorphic computing faces the challenge of managing vast amounts of data involved in simulating neuronal connections. Darwin3 overcomes this hurdle with an innovative compression mechanism that dramatically reduces memory usage. Imagine shrinking a massive library of books into a single, compact e-reader without losing any content—this is akin to what Darwin3 achieves with synaptic connections.

Perhaps the most exciting feature of Darwin3 is its on-chip learning capability. This allows the chip to learn and adapt in real-time, much like how humans learn from experience. Darwin3 can modify its behavior based on new information, leading to smarter and more autonomous systems.

The implications of Darwin3’s technology are far-reaching and transformative. In healthcare, prosthetic limbs powered by Darwin3 could learn and adapt to a user’s movements, offering a more intuitive and natural experience. This could significantly enhance the quality of life for amputees.

In robotics, robots equipped with Darwin3 could navigate complex environments with greater ease and efficiency, similar to how humans learn to maneuver through crowded spaces. This capability could revolutionize industries from manufacturing to space exploration.

Environmental monitoring could also benefit from Darwin3. Smart sensors using Darwin3 could analyze environmental data in real-time, providing immediate insights into climate conditions and helping us better manage natural resources.

The Darwin3 chip represents a monumental step forward in neuromorphic computing, bringing us closer to creating machines that can think and learn in ways previously thought impossible. As this technology continues to evolve, we can anticipate a future where intelligent systems seamlessly integrate into our daily lives, enhancing everything from medical care to environmental conservation. The research was recently published in the journal National Science Review.
