Smoltek Semi & AI

Artificial intelligence (AI) is the collective term for technologies that enable computers to perform tasks that normally require human intelligence – such as recognizing images, understanding language, playing games or making decisions. It is based on algorithms, mathematical models and (now mainly) Machine Learning and Deep Learning, where neural networks are trained on large data sets.

Fredrik Liljeberg • December 17, 2025

Today, everyone is talking about AI. But what exactly is AI?

This article digs into the development of the “mythical” AI – which is on everyone’s lips today – from its beginnings to the present day, and a little further into the near future. It also provides an update on how it is powered (the “AI chips”) – and what requirements future capacitors will need to meet to support the ever-increasing power loads that AI requires to function.

So, buckle up. Here we go!

First, AI is not new – there have been ideas and hardware implementations since the 1950s. But it is the combination of deep learning, big data and new chip architectures over the past decade that has exploded development – and produced the heavy impact AI has had in just the last couple of years. And the next 2–5 years will be about energy efficiency, integration and specialization to run even larger models – both in the cloud and at the edge.

The AI roadmap

AI has been around in electronics/microelectronics for quite some time.

  • 1950s–1970s: The first ideas about AI were formulated (Turing, Rosenblatt’s perceptron). But the hardware was too slow.
  • 1980s–1990s: Wave of “expert systems” and simpler ML algorithms. On the hardware side, CPUs and specialized DSPs were used for simpler pattern recognition, speech recognition, and signal processing.
  • 2000s: GPUs began to be used for parallel computing, giving AI a major boost (especially in image recognition).
  • 2010s: Deep learning breakthrough (e.g. AlexNet 2012) – driven by the combination of large data sets + powerful GPUs + improved algorithms. This era also saw the first dedicated AI chips (e.g. Google TPU 2016).

AI development since the start 

  • From rules to learning: Early AI systems were rule-based (“if X then Y”). Modern systems learn from data themselves.
  • From small to gigantic models: Networks with a few thousand parameters in the 90s → today hundreds of billions of parameters (e.g. GPT-4, LLaMA, PaLM).
  • From niche to general: AI is used today in smartphones, cars, healthcare, industry, language models – everywhere.

Drivers for today’s AI chip development 

  1. Exponential growth of data sets – AI models need more data.
  2. New algorithms (deep learning, transformers) – more computationally intensive.
  3. Limitations in CPU/GPU architectures → need for specialized hardware (TPU, NPU, FPGA, ASIC).
  4. Moore’s Law slows down → the focus shifts instead to heterogeneous chips, chiplets, 3D integration and energy efficiency.
  5. Market driving force – huge demand from companies like OpenAI, Google, NVIDIA, Meta and others.

What does the future (2–5 years ahead) look like for AI chips? 

  • Less energy consumption per operation: Energy efficiency (TOPS/Watt) becomes the biggest bottleneck (see the back-of-envelope sketch after this list).
  • Heterogeneous systems: Combination of CPU + GPU + NPU/TPU in the same package.
  • Chiplets & advanced packaging: 2.5D/3D integration, HBM (High Bandwidth Memory) close to the compute cores.
  • Specialization: Different chips for different tasks (training vs inference, data center vs edge).
  • Materials and new components: Thin film capacitors (like CNF-MIM), resistive RAM, photonics for AI acceleration.
  • Edge AI: Small, energy-efficient AI chips in mobiles, IoT and vehicles – not just in giant data centers.
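
To make the TOPS/Watt metric concrete, here is a minimal back-of-envelope sketch in Python. The workload size and efficiency figures are illustrative assumptions, not measurements of any particular chip.

```python
# Back-of-envelope: energy per inference as a function of accelerator efficiency.
# All numbers are illustrative assumptions, not vendor specifications.

ops_per_inference = 2e12         # assumed compute per inference pass (2 trillion ops)
efficiency_tops_per_watt = 10.0  # assumed accelerator efficiency (TOPS/W)

# 1 TOPS/W equals 1e12 operations per joule, so:
energy_joules = ops_per_inference / (efficiency_tops_per_watt * 1e12)
print(f"Energy per inference at 10 TOPS/W: {energy_joules:.3f} J")  # 0.200 J

# Doubling efficiency halves the energy bill for the same workload:
print(f"Energy per inference at 20 TOPS/W: {ops_per_inference / (20.0 * 1e12):.3f} J")  # 0.100 J
```

Scaled up to billions of inferences per day in a data center, this energy-per-operation figure – rather than raw compute – is what sets the power and cooling budget.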

Moore’s Law and the development of AI hardware

Transistor evolution since 1950: general overview of processor development 1950–2025.

1950–2000 (CPU era) 

  • Moore’s Law in full force: The number of transistors doubles every two years → exponential improvement in computing power.
  • AI research limited to academic experiments (symbolic AI, perceptrons).
  • Hardware: CPUs (serial computing).

2000–2010 (GPU era) 

  • Moore’s Law continues → but energy consumption and clock frequencies reach their limits.
  • Parallelism (GPUs) becomes the solution for deep learning → used to train e.g. AlexNet (2012).
  • The start of AI as practically useful in image recognition, language, games.

2010–present (TPU/NPU era)

  • Moore’s Law is starting to slow down: transistor sizes <10 nm, frequency increases are marginal.
  • Solution: specialization → TPU (Google), NPU (Apple Neural Engine), AI ASICs (Graphcore, Cerebras).
  • Focus on energy efficiency per operation (TOPS/Watt) instead of just more transistors.
  • AI models are exploding in size (GPT, BERT, etc.).

Future (next-gen AI chip, 2025–2030)

  • Moore’s Law is on the verge of reaching its physical limits (sub-2 nm).
  • Heterogeneous chips (CPU+GPU+NPU), chiplets and 3D integration.
  • Advanced memories (HBM, MRAM, RRAM).
  • New materials & components (CNF-MIM, photonics, neuromorphic).
  • Focus on specialized architectures for different AI workloads: training vs. inference, data center vs. edge.

From 1,000 transistors per chip in 1970 to over 50 billion transistors today.
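
As a quick sanity check of those two endpoints, a doubling every two years predicts growth of roughly this magnitude; the snippet below simply compounds the doublings from the 1970 starting point.

```python
# Sanity check of Moore's Law against the endpoints quoted above:
# ~1,000 transistors per chip in 1970, doubling every two years.
start_year, start_count = 1970, 1_000

for year in (1990, 2010, 2020):
    doublings = (year - start_year) / 2
    predicted = start_count * 2 ** doublings
    print(f"{year}: ~{predicted:.1e} transistors per chip")

# 2020 comes out at ~3.4e10, i.e. tens of billions -- consistent with
# the 50-billion-transistor processors shipping today.
```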

50 years of CPU transistor-count development: a logarithmic graph showing how transistor counts in microchips have roughly doubled every two years from 1970 to 2020 – also known as Moore’s law.
Source: https://commons.wikimedia.org/wiki/File:Moore%27s_Law_Transistor_Count_1970-2020.png

Moore’s Law states that the number of transistors will, roughly, double every second year. This prediction has created a global need for highly advanced and highly miniaturized capacitors. Due to the enormous scaling of transistors in advanced processor chips, the need for decoupling capacitors mounted extremely close to the processor chip has exploded in recent years.

However, traditional capacitor technologies have difficulties delivering high enough performance in a small enough form factor.

CNF-MIM to the rescue

Smoltek’s patent-protected nanotechnology enables next-generation capacitors with extremely high electrical performance in a very small form factor.

Smoltek’s capacitors, called CNF-MIM (Carbon Nanofiber Metal-Insulator-Metal), can be placed directly adjacent to the circuits they are to support – the key to the rapidly growing market for decoupling capacitors for high-end processors in AI, high-performance computing and edge devices.

CNF-MIM capacitors meet the semiconductor industry’s demands for next-generation capacitors with unique characteristics:

  • Extremely high capacitance* per unit area
  • Extremely low electrical losses
  • Several placement options closer to the processor than competing technologies
  • Energy efficiency (low ESR/ESL, low leakage)
  • Stability and lifetime (breakdown voltage, reliability)

* Capacitance is the ability of a component or circuit to collect and store energy in the form of an electrical charge.
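
In formula terms, capacitance relates stored charge to voltage, and the energy a capacitor holds grows with the square of the supply voltage:

$$C = \frac{Q}{V}, \qquad E = \tfrac{1}{2} C V^2$$

So a hypothetical 1 µF capacitor on a 1 V rail stores E = ½ · 10⁻⁶ F · (1 V)² = 0.5 µJ, released as a fast current pulse when the load demands it.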

Model of a “CNF-MIM AI chip”.

The four key parameters for “AI capacitors” 

Capacitors intended for use in next-generation AI chips and high-performance computing technology need to be able to handle the following:

1. Low ESR (Equivalent Series Resistance) and ESL (Equivalent Series Inductance)

  • ESR and ESL must be kept extremely low for the capacitor to deliver stable and fast current at high frequencies.
  • AI and HPC processors operate with billions of transistors that switch extremely fast → they require a clean and stable supply voltage without interference.
  • Low ESR/ESL means fast response, less heat generation and higher energy efficiency (a simple impedance sketch after this list shows why).
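
A common first-order way to see this is to model a real capacitor as a series RLC circuit. The sketch below evaluates the impedance magnitude at a few frequencies; the component values are illustrative assumptions, not measured CNF-MIM data.

```python
import math

# First-order series-RLC model of a real decoupling capacitor:
#   |Z(f)| = sqrt(ESR^2 + (2*pi*f*ESL - 1/(2*pi*f*C))^2)
# Component values are illustrative assumptions, not measured data.
C = 100e-9     # 100 nF capacitance
ESR = 10e-3    # 10 milliohm equivalent series resistance
ESL = 50e-12   # 50 pH equivalent series inductance

def impedance_magnitude(f_hz: float) -> float:
    """Impedance magnitude of the series-RLC capacitor model at f_hz."""
    reactance = 2 * math.pi * f_hz * ESL - 1 / (2 * math.pi * f_hz * C)
    return math.hypot(ESR, reactance)

for f in (1e6, 10e6, 100e6, 1e9):
    print(f"{f/1e6:7.0f} MHz: |Z| = {impedance_magnitude(f)*1e3:8.2f} mOhm")

# Above the self-resonant frequency (~71 MHz for these values) the ESL
# term dominates, so lowering ESL keeps the impedance low deeper into
# the gigahertz range where AI processors draw their current.
```

This is also why placement matters: moving the capacitor closer to the die shrinks the current loop, which shrinks the ESL term above.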

2. Breakdown voltage & reliability 

  • The capacitor must withstand the voltage levels used in advanced process nodes (often around 0.7–1.2 V).
  • An insufficiently robust dielectric can lead to breakdown.
  • Reliability (durability) is crucial for data centers and AI training, where 24/7 operation for years is the norm.

3. High capacitance density

  • In today’s chips, surface area is extremely valuable – every square micrometer must be utilized to its fullest.
  • A capacitor with high capacitance density makes it possible to build more decoupling capacitance close to the processor cores without taking up too much space.
  • This provides local energy storage that can deliver fast power pulses when millions of cores are activated simultaneously, which is central to AI models that require massive parallelism (a minimal sizing sketch follows this list).
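
How much local capacitance such a pulse demands follows directly from Q = C·V. A minimal sizing sketch, assuming illustrative numbers for the current step, its duration and the allowed voltage droop:

```python
# Decoupling capacitance needed to ride out a fast current transient.
# From Q = C*V:  C = I * dt / dV.  All numbers are illustrative assumptions.
current_step_a  = 100.0  # assumed sudden load step when cores activate (A)
duration_s      = 1e-9   # assumed transient length before the regulator responds (1 ns)
allowed_droop_v = 0.01   # assumed tolerated supply droop (10 mV on a ~1 V rail)

required_c = current_step_a * duration_s / allowed_droop_v
print(f"Required local decoupling: {required_c * 1e6:.1f} uF")  # 10.0 uF
```

Squeezing ten microfarads in right next to the die is a demanding target, which is exactly why capacitance density per unit area is a headline parameter.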

4. Relevance for AI and HPC 

  • AI chips (e.g. GPUs, TPUs) run matrix calculations on a huge scale, with extremely high and fast current variations.
  • HPC chips must run at high frequency with maximum stability.
  • In both cases, power integrity (PI) becomes a bottleneck – you have to be able to keep the voltage stable even though the current varies on a nanosecond scale.
  • Here, the thin form factor and electrical properties of the CNF-MIM capacitor are particularly attractive: they can be placed close to the transistors in the chip instead of on the circuit board, which reduces parasitics and provides faster response.

The three capacitor technologies in the AI race

Now, it’s fair to say that CNF-MIM is not the only technology for decoupling and energy storage in next-generation advanced chips. It’s competing with two other big capacitor technologies: Trench Silicon Capacitors, also known as Deep Trench Capacitors (TSC/DTC), and Multilayer Ceramic Capacitors (MLCC). The three technologies are all used for decoupling and energy storage, but they differ greatly in design, performance and integration.

Let’s take a look.

CNF-MIM, TSC (or DTC) and MLCC: illustration of their build structure.

CNF-MIM (Carbon Nanofiber – Metal-Insulator-Metal) 

  • Structure:
    • Based on vertical carbon nanofibers that provide a 3D structure and a very large surface area in a small volume. A single dielectric layer (one ALD step) is placed between the nanofibers and metal contacts.
  • Benefits:
    • Extremely thin, can be integrated directly into the chip or on an interposer.
    • Very high capacitance density per area thanks to the nanostructure (see the formula sketch after this list).
    • Low ESR/ESL because the placement can be very close to the transistors → fast transient response.
  • Challenges:
    • Relatively new technology that needs to prove long-term reliability and industrialization on a large scale.
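
The density advantage can be read off the parallel-plate relation. With a 3D nanostructure, the effective plate area per chip footprint is multiplied by an enhancement factor; the factor k below is a generic placeholder, not a published Smoltek figure:

$$C = \varepsilon_0 \varepsilon_r \frac{A}{d} \quad\Rightarrow\quad C \approx \varepsilon_0 \varepsilon_r \frac{k \cdot A_{\text{footprint}}}{d}, \qquad k \gg 1$$

The same mechanism explains why trench capacitors beat planar MIM capacitors: etched trenches also multiply the effective area, just by a smaller factor than a dense forest of vertical nanofibers can.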

TSC (Trench Silicon Capacitors) / DTC (Deep Trench Capacitors)

  • Structure:
    • Built by etching “trenches” (deep depressions) in silicon and filling them with dielectric and metal in several ALD steps. Utilizes silicon as a substrate.
  • Benefits:
    • Can be integrated into the semiconductor process or into a silicon interposer.
    • Provides relatively high capacitance density, especially compared to planar MIM capacitors.
    • Good reliability and proven technology.
  • Challenges:
    • Capacitance density is lower than what CNF-MIM can achieve.
    • Process complexity (deep etching) may limit cost-effectiveness.

MLCC (Multilayer Ceramic Capacitors) 

  • Structure:
    • Consists of multiple layers of ceramic dielectric and metal stacked in packages. Mounted as discrete components on printed circuit boards (PCBs).
  • Benefits:
    • Cheap, mass-produced and widely available technology.
    • High capacitance in small volumes for discrete components.
  • Disadvantages:
    • Relatively high ESR/ESL due to placement further from the processor (on the PCB, not in the chip).
    • Capacitance varies strongly with temperature, voltage and aging (especially with X7R/X5R ceramics).
    • Difficult to scale down for integration directly into the chip.

Technological differences in summary

  • Placement: CNF-MIM – directly under the chip/interposer; TSC/DTC – in silicon/interposer; MLCC – on the PCB.
  • Capacitance density: CNF-MIM – very high (nanostructure); TSC/DTC – high (trench structure); MLCC – moderate (bulk).
  • ESR/ESL: CNF-MIM – very low; TSC/DTC – low; MLCC – relatively high.
  • Integration: CNF-MIM – future technology for AI/HPC; TSC/DTC – already established in advanced packaging; MLCC – standard component, not chip-integrated.
  • Maturity: CNF-MIM – in development; TSC/DTC and MLCC – very mature and mass-produced.

  • MLCCs are cheap but too “slow” and too far away from the processor. They have served the early days of AI but are now on the verge of becoming obsolete.
  • TSC/DTC is a proven solution for integrated capacitance but limited in density. It serves today’s AI development.
  • CNF-MIM capacitors have the potential to provide the next leap in capacitance density and integration, making them particularly attractive for AI and HPC chips where every nanosecond and every square millimeter counts.

In conclusion, by pioneering carbon nanotechnology, Smoltek has a truly disruptive capacitor technology that combines extremely high electrical performance with a highly miniaturized product. Smoltek Semi’s CNF-MIM capacitor addresses key challenges for next-generation AI, high-performance computing and edge devices.

What does AI have to say about this? 

Okay, so this has been a deep dive into the development of AI and what is required of the capacitors that keep this not-so-new thing rolling.

But the first question – what is AI? – wasn’t really answered, was it? That might take some time to explain, though. Might as well ask the AI about it.

“Hello, what is AI?” 

“Thanks for asking. AI, or Artificial Intelligence, refers to computer systems designed to perform tasks that typically require human intelligence, such as learning, problem-solving, decision-making, and language understanding. It uses algorithms to analyze data, identify patterns, and make decisions to achieve specific goals, with common examples including virtual assistants, recommendation systems, and self-driving cars.”

“Alright, and what is HPC, or high-performance computing?” 

“High-performance computing (HPC) uses aggregated computing power, such as supercomputers or clusters of computers, to solve large, complex problems in science, engineering, and business that would be impossible on a typical computer. HPC systems process massive datasets and perform calculations at extremely high speeds, enabling complex modeling, data analysis, and artificial intelligence applications that drive innovation and provide significant competitive advantages.”

“And also, what are edge devices?” 

“Edge devices are hardware components, like routers, sensors, smartphones, and gateways, that connect to a core network and process data at or near its source, rather than sending it to a centralized cloud or data center for analysis. These devices are essential for edge computing, enabling real-time data processing, reducing latency, and improving network efficiency in applications ranging from the Internet of Things to industrial automation and smart cities.”

So, there you have it. An overview of AI developments and how Smoltek’s CNF-MIM capacitor technology has the potential to become an important pillar of energy-efficient power management in future chips.


Want to learn more about the extremely small and ultra-thin CNF-MIM capacitor? Go here!
