Artificial intelligence (AI) is the collective term for technologies that enable computers to perform tasks that normally require human intelligence – such as recognizing images, understanding language, playing games or making decisions. It is based on algorithms, mathematical models and (now mainly) Machine Learning and Deep Learning, where neural networks are trained on large data sets.
Fredrik Liljeberg • December 17, 2025
Today, everyone is talking about AI. But what exactly is AI?
This article digs into the development of the “mythical” AI – which is on everyone’s lips today – from its beginnings to the present day, and a little further into the near future. It also provides an update on how it is powered (the “AI chips”) – and what requirements future capacitors will need to meet to support the ever-increasing power loads that AI requires to function.
So, buckle up. Here we go!
First, AI is not new – there have been ideas and hardware implementations since the 1950s. But it is the combination of deep learning, big data and new chip architectures over the past decade that has made development explode and given AI the heavy impact it has had in just the last couple of years. And the next 2–5 years will be about energy efficiency, integration and specialization to run even larger models – both in the cloud and at the edge.
The AI roadmap
AI has been around in electronics/microelectronics for quite some time.
1950s–1970s: The first ideas about AI were formulated (Turing, Rosenblatt’s perceptron). But the hardware was too slow.
1980s–1990s: Wave of “expert systems” and simpler ML algorithms. On the hardware side, CPUs and specialized DSPs were used for simpler pattern recognition, speech recognition, and signal processing.
2000s: GPUs began to be used for parallel computing, giving AI a major boost (especially in image recognition).
2010s: Deep learning breakthrough (e.g. AlexNet 2012) – driven by the combination of large data sets + powerful GPUs + improved algorithms. This also saw the first dedicated AI chips (e.g. Google TPU 2016).
AI development since the start
From rules to learning: Early AI systems were rule-based (“if X then Y”). Modern systems learn directly from data (a minimal code contrast follows after this list).
From small to gigantic models: Networks with a few thousand parameters in the 90s → today hundreds of billions of parameters (e.g. GPT‑4, LLaMA, PaLM).
From niche to general: AI is used today in smartphones, cars, healthcare, industry, language models – everywhere.
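To make the rules-versus-learning contrast concrete, here is a minimal, purely illustrative Python sketch (the temperatures, labels and threshold are invented): a hand-written rule next to a tiny Rosenblatt-style perceptron that learns a similar threshold from example data.

```python
# A minimal, purely illustrative contrast (all numbers invented): a hand-written
# rule versus a tiny perceptron that learns a similar threshold from examples.

def rule_based(temp_c: float) -> bool:
    """1980s-style expert rule: 'if temperature exceeds 30 degrees, then alarm'."""
    return temp_c > 30.0

# Labelled examples the learner trains on: (temperature, alarm yes/no).
samples = [(25.0, 0), (28.0, 0), (31.0, 1), (35.0, 1), (29.0, 0), (33.0, 1)]
mean = sum(x for x, _ in samples) / len(samples)    # centre the feature

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):                                # repeated passes over the data
    for x, label in samples:
        pred = 1 if w * (x - mean) + b > 0 else 0   # current guess
        w += lr * (label - pred) * (x - mean)       # nudge weight toward the label
        b += lr * (label - pred)                    # nudge bias likewise

def learned(temp_c: float) -> bool:
    """Rosenblatt-style perceptron: the threshold is inferred, never written down."""
    return w * (temp_c - mean) + b > 0

print(rule_based(32.0), learned(32.0))   # both flag 32 degrees
print(rule_based(27.0), learned(27.0))   # both stay quiet
```

The point is not the toy example itself but that the second function never contains the number 30; the boundary is inferred from the samples, and with enough data and parameters the same principle scales up to today's deep networks.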
Drivers for today’s AI chip development
Exponential growth of data sets – AI models need more data.
New algorithms (deep learning, transformers) – more computationally intensive.
Limitations in CPU/GPU architectures → need for specialized hardware (TPU, NPU, FPGA, ASIC).
Moore’s Law slows down → the industry instead focuses on heterogeneous chips, chiplets, 3D integration and energy efficiency.
Market driving force – huge demand from companies like OpenAI, Google, NVIDIA, Meta and others.
What does the future (2–5 years ahead) look like for AI chips?
Less energy consumption per operation: Energy efficiency (TOPS/Watt) becomes the biggest bottleneck (a back-of-the-envelope calculation follows after this list).
Heterogeneous systems: Combination of CPU + GPU + NPU/TPU on the same package.
Chiplets & advanced packaging: 2.5D/3D integration, HBM (High Bandwidth Memory) close to the compute cores.
Specialization: Different chips for different tasks (training vs inference, data center vs edge).
Materials and new components: Thin film capacitors (like CNF-MIM), resistive RAM, photonics for AI acceleration.
Edge AI: Small, energy-efficient AI chips in mobiles, IoT and vehicles – not just in giant data centers.
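To get a feel for why TOPS/Watt (tera-operations per second per watt) becomes the bottleneck, here is a back-of-the-envelope calculation; the throughput, power and workload numbers are invented for illustration and are not vendor figures.

```python
# Back-of-the-envelope energy budget for an AI accelerator.
# All figures are assumed for illustration; they are not vendor specifications.

tops    = 400        # assumed throughput: 400 tera-operations per second
power_w = 200        # assumed power draw at that throughput, in watts

joule_per_op = power_w / (tops * 1e12)                  # energy per operation
print(f"{joule_per_op * 1e12:.2f} pJ per operation")    # -> 0.50 pJ/op

ops_per_query = 1e15     # assumed compute for one query to a large model
energy_per_query = ops_per_query * joule_per_op
print(f"{energy_per_query:.0f} J of compute per query")  # -> 500 J

queries_per_day = 1e8    # assumed daily load across a fleet of accelerators
kwh_per_day = queries_per_day * energy_per_query / 3.6e6
print(f"{kwh_per_day:,.0f} kWh per day for compute alone")  # -> ~13,889 kWh

# Halving the energy per operation halves both the per-query and the per-day
# figures, which is why efficiency, not raw throughput, becomes the limit.
```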
Moore’s Law and the development of AI hardware
General overview of processor development 1950–2025.
1950–2000 (CPU era)
Moore’s Law in full force: Number of transistors doubles every two years → exponential improvement in computing power.
AI research limited to academic experiments (symbolic AI, perceptrons).
Hardware: CPUs (serial computing).
2000–2010 (GPU era)
Moore’s Law continues → but energy consumption and clock frequencies reach limits.
Parallelism (GPUs) becomes the solution for deep learning → used to train e.g. AlexNet (2012).
The start of AI as practically useful in image recognition, language, games.
2010–present (TPU/NPU era)
Moore’s Law is starting to slow down: transistor sizes <10 nm, frequency increases are marginal.
Focus on energy efficiency per operation (TOPS/Watt) instead of just more transistors.
AI models are exploding in size (GPT, BERT, etc.).
Future (next-gen AI chip, 2025–2030)
Moore’s Law is approaching its physical limits (sub-2 nm).
Heterogeneous chips (CPU+GPU+NPU), chiplets and 3D integration.
Advanced memories (HBM, MRAM, RRAM).
New materials & components (CNF-MIM, photonics, neuromorphic).
Focus on specialized architecture for different AI workloads: training vs. inference, data center vs. edge.
From 1,000 transistors per chip in 1970 to over 50 billion transistors today.
50 years of CPU transistor-count development. A logarithmic graph showing how transistor counts in microchips roughly double every two years from 1970 to 2020 – also known as Moore’s Law. Source: https://commons.wikimedia.org/wiki/File:Moore%27s_Law_Transistor_Count_1970-2020.png
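A quick sanity check of those figures, assuming an idealized two-year doubling (a simplification of the real cadence):

```python
# Sanity-checking the figures above: doubling every two years from roughly
# 1,000 transistors per chip in 1970 (an idealized Moore's Law cadence).

start_year, start_count = 1970, 1_000
for year in (1980, 1990, 2000, 2010, 2020):
    doublings = (year - start_year) / 2
    count = start_count * 2 ** doublings
    print(f"{year}: ~{count:,.0f} transistors")

# 2020: ~33,554,432,000, i.e. tens of billions, the same order of magnitude
# as the 50+ billion transistors found in today's largest processors.
```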
Moore’s Law states that the number of transistors will double roughly every second year. This prediction has created a global need for highly advanced and highly miniaturized capacitors. Due to the enormous scaling of transistors in advanced processor chips, the need for decoupling capacitors mounted extremely close to the processor chip has exploded in recent years.
However, traditional capacitor technologies have difficulty delivering high enough performance in a small enough form factor.
CNF-MIM to the rescue
Smoltek’s patent-protected nanotechnology enables next-generation capacitors with extremely high electrical performance in a very small form factor.
Smoltek’s capacitors, called CNF-MIM (Carbon Nanofiber Metal-Insulator-Metal), can be placed directly adjacent to the circuits they are to support – the key to the rapidly growing market for decoupling capacitors for high-end processors – such as AI, High-Performance Computing and edge devices.
CNF-MIM capacitors meet the semiconductor industry’s demands for next-generation capacitors with unique characteristics:
Extremely high capacitance* per unit area
Extremely low electrical losses
Several placement options closer to the processor than competing technologies
Energy efficiency (low ESR/ESL, low leakage)
Stability and lifetime (breakdown voltage, reliability)
* Capacitance is the ability of a component or circuit to collect and store energy in the form of an electrical charge.
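For a simple parallel-plate structure, capacitance scales as C = ε0·εr·A/d, so “capacitance per unit area” is won by thinner, higher-k dielectrics and, above all, by packing more effective plate area into the same footprint. The sketch below runs through the arithmetic with assumed, illustrative values (not Smoltek measurements) to show how a 3D structure multiplies the density of a flat capacitor.

```python
# Parallel-plate estimate: C = eps0 * eps_r * A / d. All values are assumed
# for illustration and are not measured Smoltek figures.

eps0  = 8.854e-12     # vacuum permittivity, F/m
eps_r = 20.0          # assumed relative permittivity of a high-k dielectric
d     = 10e-9         # assumed dielectric thickness: 10 nm

per_mm2 = eps0 * eps_r / d * 1e-6          # farads per mm^2 of chip footprint
print(f"flat capacitor: {per_mm2 * 1e9:.0f} nF/mm^2")               # ~18 nF/mm^2

area_boost = 50       # hypothetical surface-enhancement factor of a 3D structure
print(f"3D structure:   {per_mm2 * area_boost * 1e6:.2f} uF/mm^2")  # ~0.89 uF/mm^2

# The footprint stays the same; only the effective plate area A grows. That is
# the lever every high-density capacitor technology (trenches, nanofibers) pulls.
```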
Model of a “CNF-MIM AI chip”.
The four key parameters for “AI capacitors”
Capacitors intended for use in next-generation AI chips and high-performance computing technology need to be able to handle the following:
1. Extremely low ESR & ESL
ESR and ESL must be kept extremely low for the capacitor to deliver stable and fast current delivery at high frequencies.
AI and HPC processors operate with billions of transistors that switch extremely fast → they require clean and stable supply voltage without interference.
Low ESR/ESL means fast response, less heat generation and higher energy efficiency.
2. Breakdown voltage & reliability
The capacitor must withstand the voltage levels used in advanced process nodes (often around 0.7–1.2 V).
An insufficiently robust dielectric can lead to breakdown.
Reliability (durability) is crucial for data centers and AI training, where 24/7 operation for years is the norm.
3. High capacitance density
In today’s chips, surface area is extremely valuable – every square micrometer must be utilized to its fullest.
A capacitor with high capacitance density makes it possible to build more decoupling capacitors close to the processor cores without taking up too much space.
This provides local energy storage that can deliver fast power pulses when huge numbers of cores are activated simultaneously, which is central to AI models that require massive parallelism.
4. Relevance for AI and HPC
AI chips (e.g. GPUs, TPUs) run matrix calculations on a huge scale, with extremely high and fast current variations.
HPC chips must run at high frequency with maximum stability.
In both cases, power integrity (PI) becomes a bottleneck – you have to be able to keep the voltage stable even though the current varies on a nanosecond scale.
Here, the thin form factor and electrical properties of the CNF-MIM capacitor are particularly attractive: they can be placed close to the transistors in the chip instead of on the circuit board, which reduces parasitics and provides faster response.
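A rough calculation illustrates the point. The sketch below compares a board-mounted decoupling capacitor with one placed right next to the die during a fast load step; every component value is an assumption chosen for illustration, not data for any specific product.

```python
# Rough power-integrity sketch. All component values are assumed for
# illustration only; none describe a specific product.

def droop_mv(delta_i, rise_time, esr, esl, cap):
    """Approximate supply disturbance during a fast load step, in millivolts."""
    v_resistive = delta_i * esr                   # I * R drop
    v_inductive = esl * delta_i / rise_time       # L * di/dt kick
    v_charge    = delta_i * rise_time / cap       # sag while the capacitor sources charge
    return (v_resistive + v_inductive + v_charge) * 1e3

step, rise = 20.0, 1e-9            # assumed 20 A load step with a 1 ns rise time
budget_mv  = 0.8 * 0.05 * 1e3      # 0.8 V rail, +/-5 % tolerance -> 40 mV budget

# Hypothetical parameter sets: the board-level path adds series inductance.
on_pcb   = droop_mv(step, rise, esr=2e-3,   esl=500e-12, cap=10e-6)
near_die = droop_mv(step, rise, esr=0.5e-3, esl=1e-12,   cap=10e-6)

print(f"budget:                {budget_mv:.0f} mV")
print(f"capacitor on the PCB:  ~{on_pcb:,.0f} mV droop")    # orders of magnitude too much
print(f"capacitor next to die: ~{near_die:.0f} mV droop")   # inside the budget
```

The exact numbers matter less than the structure of the result: at nanosecond timescales the inductive term dominates, so shortening the current path to the die (minimizing ESL) buys more than simply adding bulk capacitance on the board.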
The three capacitor technologies in the AI race
Now, it’s fair to say that CNF-MIM is not the only technology for decoupling and energy storage in next-generation advanced chips. It’s competing with two other big capacitor technologies: Trench Silicon Capacitors, also known as Deep Trench Capacitors (TSC/DTC), and Multilayer Ceramic Capacitors (MLCC). The three technologies are all used for decoupling and energy storage, but they differ greatly in design, performance and integration.
Let’s take a look.
CNF-MIM, TSC (or DTC) and MLCC: illustration of their build structures.
CNF-MIM (Carbon Nanofiber Metal-Insulator-Metal)
Structure:
Based on vertical carbon nanofibers that provide a 3D structure and a very large surface area in a small volume. A single dielectric layer (one ALD step) is placed between the nanofibers and the metal contacts.
Benefits:
Extremely thin, can be integrated directly into the chip or on an interposer.
Very high capacitance density per area thanks to the nanostructure.
Low ESR/ESL because the placement can be very close to the transistors → fast transient response.
Challenges:
The technology is relatively new and needs to prove long-term reliability and industrialization on a large scale.
TSC / DTC (Trench Silicon Capacitors / Deep Trench Capacitors)
Structure:
Built by etching “trenches” (deep depressions) in silicon and filling them with dielectric and metal in several ALD steps. Utilizes silicon as a substrate.
Benefits:
Can be integrated into the semiconductor process or into a silicon interposer.
Provides relatively high capacitance density, especially compared to planar MIM capacitors.
Good reliability and proven technology.
Challenges:
Capacitance density is lower than what CNF-MIM can achieve.
Process complexity (deep etching) may limit cost-effectiveness.
MLCC (Multilayer Ceramic Capacitors)
Structure:
Consists of multiple layers of ceramic dielectrics and metal layers stacked in packages. Mounted as discrete components on printed circuit boards (PCBs).
Benefits:
Cheap, mass-produced and widely available technology.
High capacitance in small volumes for discrete components.
Disadvantages:
Relatively high ESR/ESL due to placement further from the processor (on the PCB, not in the chip).
Capacitance varies strongly with temperature, voltage and aging (especially with X7R/X5R ceramics).
Difficult to scale down to integration directly into the chip.
Technological differences in summary
| Characteristic | CNF-MIM | TSC / DTC | MLCC |
| --- | --- | --- | --- |
| Placement | Directly under chip/interposer | In silicon/interposer | On PCB |
| Capacitance density | Very high (nanostructure) | High (trench structure) | Moderate (bulk) |
| ESR/ESL | Very low | Low | Relatively high |
| Integration | Future technology for AI/HPC | Already established in advanced packaging | Standard component, not chip-integrated |
| Maturity | In development | Very mature and mass-produced | Very mature and mass-produced |
MLCCs are cheap but too “slow” and too far away from the processor. They have served the early days of AI but are now on the verge of becoming obsolete.
TSC/DTC is a proven solution for integrated capacitance but is limited in density. It serves today’s AI development.
CNF-MIM capacitors have the potential to provide the next leap in capacitance density and integration, making them particularly attractive for AI and HPC chips where every nanosecond and every square millimeter counts.
In conclusion: by pioneering carbon nanotechnology, Smoltek has created a truly disruptive capacitor technology that combines extremely high electrical performance with a highly miniaturized product. Smoltek Semi’s CNF-MIM capacitor addresses key challenges for next-generation AI, high-performance computing and edge devices.
What does AI have to say about this?
Okay, so this has been a deep dive into the development of AI and what is required of the capacitors to keep this not-so-new thing rolling.
But the first question – what is AI? – wasn’t really answered, was it? That might take some time to explain, though. Might as well ask the AI about it.
“Hello, what is AI?”
“Thanks for asking. AI, or Artificial Intelligence, refers to computer systems designed to perform tasks that typically require human intelligence, such as learning, problem-solving, decision-making, and language understanding. It uses algorithms to analyze data, identify patterns, and make decisions to achieve specific goals, with common examples including virtual assistants, recommendation systems, and self-driving cars.”
“Alright, and what is HPC, or high-performance computing?”
“High-performance computing (HPC) uses aggregated computing power, such as supercomputers or clusters of computers, to solve large, complex problems in science, engineering, and business that would be impossible on a typical computer. HPC systems process massive datasets and perform calculations at extremely high speeds, enabling complex modeling, data analysis, and artificial intelligence applications that drive innovation and provide significant competitive advantages.”
“And also, what are edge devices?”
“Edge devices are hardware components, like routers, sensors, smartphones, and gateways, that connect to a core network and process data at or near its source, rather than sending it to a centralized cloud or data center for analysis. These devices are essential for edge computing, enabling real-time data processing, reducing latency, and improving network efficiency in applications ranging from the Internet of Things to industrial automation and smart cities.”
So, there you have it. An overview of AI developments and how Smoltek’s CNF-MIM capacitor technology has the potential to become an important pillar of energy-efficient power management in future chips.
Want to learn more about the extremely small and ultra-thin CNF-MIM capacitor? Go here!