This is the critical difference between the IT era and the Internet revolution: the first made existing companies more efficient; the second, primarily by making distribution free, destroyed those same companies’ business models.
The important insight is that in a first phase the widespread use of IT increased productivity and earnings by streamlining production. But only when IT and the Internet streamlined distribution did incumbent companies crumble.
Many markets have been completely destroyed and rebuilt (on a smaller scale) by Internet giants and newcomers.
The newspaper business is a shadow of what it used to be. So are travel agencies and directory companies. Their business model, i.e. their capability to get paid, was enabled by owning distribution (not the mythical means of production).
What will the next step be? My hunch is that it could be “accreditation”. If so, things are going to get hard for the Education and Health sectors.
Again, this is a good opportunity for existing Universities and Schools, for Health Companies and Hospitals, to move first.
But it is also a good opportunity for newspapers to play a new game.
I always tell my students that the smartphone is the reference electronic system. Smartphone fabrication requires more integrated circuits than any other market segment in the Electronic Systems Industry: more than PCs, much more than automotive applications.
The data below, from TSMC’s recent third-quarter presentation, come in handy. TSMC – Taiwan Semiconductor Manufacturing Company – is not a household name, but it is the largest semiconductor foundry, i.e. semiconductor manufacturer for third parties. For example, TSMC manufactures the A10 processor of the iPhone 7, and many of the Snapdragon processors of high-end Android phones.
60% of TSMC’s revenue in 3Q comes from “Communication” applications, which means smartphones.
In terms of the breakdown per technology type, 31% of revenue is from the 16/20 nm CMOS process, which is mainly used for high-performance smartphone processors. The steep increase in 3Q 2016 revenue from 16/20 nm technology, visible in the rightmost bar in the figure below, is most likely due to the production of the A10 processor of the iPhone 7.
No sign of Moore’s law slowing down from these data, by the way.
The Nanohub has recently posted a video of a presentation I gave last September during the International Workshop on Computational Electronics, at Purdue University in West Lafayette, Indiana. It covers some of our most recent work on electronics based on two-dimensional materials.
I really like the software platform, with video and audio, slides, and the table of contents in the same page. There’s a lot of great educational and academic content on the Nanohub, for the curious and the professional.
Click here to see the presentation in html format (video and synchronized high-definition slides), or watch the YouTube video below.
This is the transcript of a talk I gave in Siena on 26 June 2015, for the annual meeting of the Italian association of scholars and researchers in Electronics (“Gruppo Italiano Elettronica”).
Semiconductor electronics was shaped into a form very close to the present one between the 30s and the 60s, as a discipline distinct from solid-state physics and vacuum-tube electronics. During those developments it became clear that some electrical engineers had to be trained as scientists, and the first PhD program for engineers was started at MIT in 1952 at the initiative of Gordon Brown, then Head of the Department of Electrical Engineering.
Already in 1932, in a paper in Zeitschrift für Physik on the theory of metal-semiconductor diodes, we see an illustration of the principle of operation using the now-common band-edge profiles.
And this is a photo of Shockley from 1950, where we can clearly see the band-edge profiles of a bipolar junction transistor. This graphical way of describing the operation of semiconductor devices became common in the 30s. Basically, if you take a modern textbook or some modern papers, you find a very similar way of describing transistor operation.
Why has semiconductor electronics been so successful for such a long time? Let’s look at what happened. Bardeen, Brattain and Shockley invented the first transistor in 1947. They were actually trying to obtain a field-effect transistor, but it worked out differently, and they obtained the point-contact transistor.
The first transistors were made of germanium; then silicon became the material of choice, right after the invention of the integrated circuit by Kilby and Noyce. Silicon dioxide was a very good dielectric and could be grown directly on silicon.
The integrated circuit allowed the industry to double the number of transistors on a single chip every 1–2 years. And it is still working now, at a few billion transistors per chip. This exponential behaviour came to be known as Moore’s law, after a prediction that Gordon Moore made in 1965, with only a few data points, exactly 50 years ago.
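As a quick sanity check of the arithmetic, a doubling every two years starting from a modest mid-60s chip really does land at a few billion transistors fifty years later. The 64-transistor starting point below is an illustrative assumption, not Moore's actual data point:

```python
# Moore's law as stated above: transistor count per chip doubles every 1-2 years.
# Illustrative arithmetic only; base_count is an assumed mid-60s starting point.
def transistors(year, base_year=1965, base_count=64, doubling_years=2):
    """Project transistor count per chip, doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

count_2015 = transistors(2015)  # 25 doublings: 64 * 2**25, about 2.1e9
```

Twenty-five doublings multiply the count by about 3.4e7, i.e. from tens of transistors to a few billion, consistent with today’s chips.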
Actually, after moving from bipolar devices to NMOS and then to CMOS, the track to scaling was well defined. I do not want to discount the huge investment in technology that was required, but from the device point of view, little changed in terms of materials and geometries except for scaling.
But already in the 80s some started to see that scaling was becoming harder, and that intrinsic limitations were therefore present. Some alternatives were proposed: one very interesting at the time was so-called bandgap engineering, proposed among others by Federico Capasso, then at Bell Labs. The core of the proposal was to exploit the newly available growth techniques to fabricate heterostructures and superlattices, in order to adjust the bandstructures to optimize existing devices or to create new device concepts. Actually, it did not go this way, and it never made it into the mainstream in this form. The semiconductor industry found new ways to proceed with scaling down, leaving the device structure unaltered.
But then, something happened 12 years ago. Scaling continued, but strong innovations had to be introduced in terms of materials and structure.
First, in 2003, strained silicon. We stretch the silicon crystal to modify the bandstructure, to adjust the energies of the conduction band minima and valence band maxima, and to modify the effective mass, in order to boost mobility. Tensile strain is needed for NMOS, compressive strain for PMOS. As you can see, this is essentially a type of bandstructure engineering, which actually became mainstream, in a way different from what was initially proposed.
Then, in 2007, the high-k metal gate process, another type of bandgap engineering: a gate stack with an insulator with a high dielectric constant and a smaller gap, which allows the use of a physically thicker layer to suppress gate leakage current while maintaining good electrostatic control of the channel.
Finally, in 2011, the trigate process: no more planar transistors, but a three-dimensional device, to improve the electrostatic control of the channel.
In the end we really do not recognize a transistor anymore. What’s this?
You see, the image of a 22 nm transistor is closer to a molar than to a 130 nm transistor. It is definitely not your dad’s transistor.
What should we expect now?
Simply, more of the same:
more innovation in materials, for example III-V semiconductors, germanium, or 2D materials for the transistor channel;
more innovation in structures, for example the use of 3D structures, which is now a reality in non-volatile memories;
more physical mechanisms, adding to transport and electrostatics also mechanics, microfluidics, optics, magnetics, piezoelectrics, thermoelectrics. For example, microelectromechanical systems are already a $12B global business.
Now it should be clear what is The New Semiconductor Electronics.
We had a semiconductor electronics with few materials, mainly the silicon-silicon oxide system, planar devices, and only electronics on a silicon chip.
The New Semiconductor Electronics uses a wealth of materials, a wealth of geometries, and many more physical mechanisms on a silicon platform.
Of course it is a huge intellectual adventure, because we need to change our skin a bit, and to learn lots of new things.
Are we ready for this?
I don’t know. In the 60s, when Semiconductor Electronics was established as a distinct discipline, the Semiconductor Electronics Education Committee was formed to prepare a set of six paperback textbooks to teach the subject in an adequate way.
I am trying to buy all those books from AbeBooks. If you read them, you notice that they are very similar to the books we use today, 50 years later.
We need some effort in renewing academic education and research in the field of semiconductor electronics.
Not everything has become more complicated in the new semiconductor electronics. Indeed, some things are simpler at the nanoscale!
For example, in traditional semiconductor electronics we look at bipolar junction transistors and field-effect transistors as different devices, the former dominated by diffusion currents, the latter by drift currents.
However, for small channel length, if we look at the band edge profiles of the two devices, as shown in the figure below, we clearly see that they are identical, and that both devices are described by the same physical mechanism: thermionic emission over a tunable barrier.
As far as noise is concerned, we learn from traditional device electronics books that shot noise describes the current noise in a bipolar transistor, and that “corrected” thermal noise describes the current noise in a field-effect transistor. However, it is well known that as the channel length decreases, the latter correction becomes larger and larger in order to enable the model to reproduce reality.
The fact is that in nanoscale FETs the operating mechanism is similar to that of a bipolar transistor, and therefore “suppressed” shot noise is the proper noise mechanism for describing an FET as well. The “suppression” is due to the proximity of the gate contact.
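As a back-of-the-envelope illustration, shot noise has the one-sided power spectral density S_I = 2qIF, where the Fano factor F quantifies the suppression: F = 1 for full (bipolar-like) shot noise, F &lt; 1 for the suppressed shot noise discussed above. The Fano factor value in the sketch is purely illustrative, not taken from the cited experiments:

```python
Q_E = 1.602e-19  # electron charge [C]

def shot_noise_psd(current, fano=1.0):
    """One-sided current-noise power spectral density S_I = 2 q I F [A^2/Hz].

    fano=1.0 gives full shot noise (bipolar-like operation);
    fano < 1 models the gate-induced suppression in nanoscale FETs.
    """
    return 2 * Q_E * current * fano

full = shot_noise_psd(1e-6)             # 1 uA, full shot noise
suppressed = shot_noise_psd(1e-6, 0.3)  # illustrative Fano factor of 0.3
```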
This concept was clearly expressed more than 10 years ago, and in recent years it has finally been demonstrated in experiments on 10-nm-long FETs.
As a third example, let me show this figure from Willy Sansen’s keynote at ISSCC 2015. For analog circuit design in aggressively scaled-down CMOS processes, below 20 nm, subthreshold operation provides the best figures of merit in terms of performance at power parity (the corresponding figure of merit is the cutoff frequency times the transconductance divided by the bias current).
This is very interesting, because in traditional undergraduate education the subthreshold operation of the field-effect transistor is often not considered. In subthreshold operation, currents depend exponentially on bias voltages, as in bipolar transistors.
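The exponential dependence is exactly why subthreshold operation is so power-efficient. Using the usual textbook expressions (the ideality factor n and the overdrive voltage below are illustrative numbers, not from Sansen's talk), the transconductance efficiency gm/ID can be compared in a few lines:

```python
V_T = 0.0259  # thermal voltage kT/q at room temperature [V]

def gm_over_id_subthreshold(n=1.3):
    # Subthreshold: I_D ~ exp(V_GS / (n*V_T))  =>  gm/I_D = 1 / (n*V_T)
    return 1.0 / (n * V_T)

def gm_over_id_strong_inversion(v_ov):
    # Square-law MOSFET in strong inversion: I_D ~ V_ov**2  =>  gm/I_D = 2 / V_ov
    return 2.0 / v_ov

sub = gm_over_id_subthreshold()             # ~30 1/V
strong = gm_over_id_strong_inversion(0.2)   # ~10 1/V for 200 mV overdrive
```

Subthreshold operation yields roughly three times more transconductance per unit of bias current than a device with a typical 200 mV overdrive, which is the point of the figure of merit above.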
I would now like to give some examples of how we address the themes of the new semiconductor electronics.
In the last decade, two-dimensional materials have attracted incredible interest for applications in electronics. It all started when the electrical properties of graphene were discovered and characterized in 2004 in Manchester. Graphene is just one atom thick, and is therefore an ideal two-dimensional material; it can have a very high mobility at room temperature, close to 10,000 cm2/Vs when deposited on a substrate. However, it also has a zero energy gap, which represents a severe obstacle to its use in electronics.
After graphene, other two-dimensional materials have received enormous attention: among them boron nitride, the family of transition metal dichalcogenides, bismuth selenide and bismuth telluride, and others.
They are also very thin, generally have a medium-to-low mobility and have an energy gap from 0.1 eV to 5 eV, with the usual tradeoff between mobility and energy gap.
If we look at performance figures, such as the delay time and the dynamic power indicator, they are in the optimistic case in line with the evolution of the International Technology Roadmap for Semiconductors, since device modeling of defect-free device structures predicts performance comparable to the expectations at the end of the roadmap horizon (2026).
In 2012, a “materials on demand” paradigm was proposed, i.e., the possibility of obtaining 3D materials with tailored properties by stacking several layers of 2D materials coupled by van der Waals forces.
You can probably recognize the similarity with the paradigm of “bandgap engineering” of the 80s that I showed before. Of course, things are not identical: in the 80s they were considering III-V heterostructures, consisting of layers a few nanometers thick, almost lattice-matched; here we are dealing with single atomic layers, often with completely different lattices, and we are playing with a larger number of atomic species. Of course, history does not exactly repeat itself, but it rhymes, as Mark Twain famously said.
Let me say how we address these problems. Our specialty is the early assessment of device potential via modelling. To do that, we use our in-house simulation tool, NanoTCAD ViDES, which now has 15 years of development behind it. It started with a European project that I coordinated in the 5th Framework Programme from 2000 to 2003, and it now has atomistic simulation capabilities for 3D devices, coupling transport and electrostatics. The lead developer is now my colleague Gianluca Fiori. We have made the source code and full documentation freely available, to let everybody use it. As of today, a few groups are using the code, also independently of us, and we maintain a list of known publications using it.
With the code we can compute transport properties of silicon- and carbon-based devices, using non-equilibrium Green’s functions with a tight-binding Hamiltonian, or an effective-mass approach. To put it simply, we build the device or its key building blocks atom by atom, and then simulate the operation of the complete device.
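To give a flavor of what an NEGF calculation looks like, here is a minimal toy example of my own (an illustration of the method, not the in-house code): ballistic transmission through a perfect 1D tight-binding chain, with semi-infinite leads included through their analytical surface self-energy. For a defect-free chain, the transmission inside the band is exactly 1, i.e. one perfectly open conduction channel:

```python
import numpy as np

def transmission(E, N=6, t=1.0, eta=1e-9):
    """NEGF transmission through an N-site 1D tight-binding chain.

    Onsite energy 0, hopping -t, identical semi-infinite leads.
    Valid for energies inside the band, |E| < 2|t|.
    """
    # Retarded surface Green's function of a semi-infinite 1D chain
    # (closed form, with Im(g) < 0 as required for a retarded function).
    g_s = (E - 1j * np.sqrt(4 * t**2 - E**2)) / (2 * t**2)
    sigma = t**2 * g_s  # lead self-energy projected onto the edge site

    # Device Hamiltonian: nearest-neighbour hopping -t.
    H = np.diag(-t * np.ones(N - 1), 1) + np.diag(-t * np.ones(N - 1), -1)

    Sigma1 = np.zeros((N, N), complex); Sigma1[0, 0] = sigma    # left lead
    Sigma2 = np.zeros((N, N), complex); Sigma2[-1, -1] = sigma  # right lead

    # Retarded Green's function of the device coupled to the leads.
    G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H - Sigma1 - Sigma2)

    # Broadening matrices and the Landauer-Caroli transmission formula.
    Gamma1 = 1j * (Sigma1 - Sigma1.conj().T)
    Gamma2 = 1j * (Sigma2 - Sigma2.conj().T)
    return np.trace(Gamma1 @ G @ Gamma2 @ G.conj().T).real
```

A real device simulator adds the self-consistent coupling to electrostatics (Poisson’s equation) and a realistic atomistic Hamiltonian, but the transmission kernel has this same structure.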
Our approach is to use our modeling tools, other tools, and analytical modeling to evaluate the feasibility and the possible performance of a device structure (assuming that fabrication problems will be solved). Here a mix of the physicist’s and the engineer’s attitudes is really important. We look at new effects for opportunities, we are optimistic but skeptical, and we benchmark against existing technology and its foreseeable evolution, as predicted for example by the International Technology Roadmap for Semiconductors.
Essentially, we use a “via negativa” approach: we try to rapidly filter out device structures and operating principles that are not promising, and save the very few promising ones for further investigation.
Via negativa: “This won’t work. This neither. Try instead this.”
In the modern scientific PR-conscious world where hype is the norm, saying that some things do not work is not the easy way to become popular.
I need to add a few more words on our methodology. We need to consider many different materials, and for some of them we have very limited information. In addition, the material properties are affected by the particular heterostructure we choose. We therefore need a way to compute material properties and to use those results in the device simulations.
This requires a specific multiscale simulation procedure, because ab initio quantum chemistry tools (DFT) for the calculation of material properties are very demanding from the computational point of view.
For this reason, we have recently defined a multiscale methodology in which we use an open-source DFT tool (e.g., Quantum ESPRESSO) for the ab initio calculation of material properties. Once we have a good single-particle Hamiltonian, we derive a tight-binding Hamiltonian, using Wannier90 to project the Hamiltonian onto a basis of localized Wannier functions. Finally, we perform NEGF calculations with our in-house code.
In the rest of this talk, I want to use my time to show you a device that is still promising for applications after our via negativa test, so as to end on a positive note.
The lateral heterostructure field-effect transistor is a transistor structure we proposed in 2011. The channel consists of a lateral heterostructure, where on the same single sheet we have regions of graphene and regions of another 2D material with a comparable lattice. In our case, the part of the channel under the gate is made of a large-gap material, such as boron nitride, because it has to stop the current in the off state, and the outer source and drain regions are made of graphene.
In 2012, a paper from Cornell experimentally demonstrated the possibility of patterning lateral graphene-boron nitride heterostructures, via graphene patterning and subsequent CVD regrowth of boron nitride. There were only limited electrical measurements in the original paper.
Then, in 2013, the concept was demonstrated in a paper from HRL Laboratories, using fluorinated graphene as the central region. The device works with a very good Ion/Ioff ratio.
We used density-functional theory to obtain the band structure for different types of central region, and in the end we chose BC2N, which is lattice-matched to graphene and provides a valence-band offset of 0.64 eV with respect to the graphene Dirac point.
If we want to compare the potential of these devices with CMOS, we need to look at the expectations of the ITRS, 2012 edition, since it only deals with planar transistors.
Let us consider the table below, showing expectation for high performance CMOS process (HP) and for low power CMOS (LP), from 2014 to 2026.
Gate length is shrinking from 22 to 6 nm, and the supply voltage is going down to half a volt. The ratio of the “on” current (Ion) to “off” current (Ioff) is larger than 10^4.
From the point of view of dynamic performance, we have two figures of merit: the delay time, which is the ratio of the charge variation in the device between the “on” state and the “off” state, to the “on” current. And the power delay product (or dynamic power indicator), which is the product of the supply voltage, Ion and the delay time.
The delay time is a measure of speed, of computational performance. The power-delay product is a measure of energy efficiency.
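These two definitions translate directly into a short calculation. The numbers below are illustrative order-of-magnitude values consistent with the figures quoted here, not entries taken from the ITRS table:

```python
def delay_time(delta_q, i_on):
    """tau = delta_Q / I_on: charge variation between the "on" and "off"
    states, divided by the "on" current. Units: C / A = s."""
    return delta_q / i_on

def power_delay_product(vdd, i_on, tau):
    """PDP = V_DD * I_on * tau. Units: V * A * s = J."""
    return vdd * i_on * tau

# Illustrative order-of-magnitude values only:
tau = delay_time(delta_q=5e-16, i_on=1e-3)  # 0.5 ps
pdp = power_delay_product(0.7, 1e-3, tau)   # 3.5e-16 J, i.e. 0.35 fJ
```

With these round numbers the delay lands at a fraction of a picosecond and the PDP at a fraction of a femtojoule, the same ranges the roadmap projects for scaled CMOS.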
As you can see, CMOS is expected to remain at a fraction of a picosecond for the delay time and a fraction of a femtojoule per micron for the PDP.
Now we can compare the LHFET with the ITRS 2012 (we chose this edition because it focuses on planar transistors). The blue symbols are for the high-performance process, the red symbols for low power. We plot the on current, the delay time, and the power-delay product as a function of the year of introduction. We also include the BC2N LHFETs as green triangles, inserting them at the year of introduction of the CMOS process with the same gate length.
You see, the delay time is better than that of HP CMOS, and the power-delay product is better than that of LP CMOS. Of course, we are considering defect-free ballistic devices here. But it is still important, because transistors based on vertical heterostructures of 2D materials, even in the most optimistic case, exhibit a delay time that is orders of magnitude higher. More details can be found in a dedicated paper.
I am going to conclude now. The new semiconductor electronics represents a huge intellectual challenge, requiring us to address new materials, new geometries, and new physical mechanisms.
Therefore, we need to build a consistent body of knowledge drawing concepts from engineering, physics, and chemistry, and streamlining the interfaces between such disciplines. We have also to find ways to teach this new body of knowledge.
I gave this presentation on Graphene and 2D Electronics at the Marie Curie Conference 2013 in Florence, on Nov. 25. It was a session with mind-blowing presentations, especially from renowned surgeon Ugo Boggi and CMS emeritus spokesperson Guido Tonelli. As I knew it would be, speaking just after a superstar physicist is really, really challenging.
Everybody had to keep their presentation understandable by a general audience of researchers and Marie Curie fellows from diverse disciplines (hard sciences, social sciences, and humanities), and to stay within 12 minutes. For this reason I think this presentation can be enjoyed by casual visitors of my website. Enjoy.
However, the whole session was humbling. I was really honoured and thankful to the organizing committee for the invitation. Most of all, it was great fun.
“There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the Thought Police plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to.”
His motive was political — obviously. His harm was exactly none — as JSTOR effectively acknowledged. But he deserved, your “career prosecutors” believed, to be deprived of his rights as a citizen (aka, a “felon,” no longer entitled to the political rights he fought to perfect) because of what he did.
Yet here’s the thing to remember on MLK weekend (even though my saying this violates a rule I believe in firmly, a kind of inverse to Godwin’s law, because though I believe these two great souls were motivated by exactly the same kind of justice, King’s cause was greater):
How many felonies was Martin Luther King, Jr., convicted of?
King, whose motives were political too, but who, unlike Aaron, triggered actions which caused real harm (as in physical damage).
What’s that number?
And how many was he even charged with in the whole of his career?
Two bogus charges (perjury and tax evasion) from Alabama, which an all-white jury acquitted him of.
These are basically the data behind a post-Christmas-party discussion. From the World Bank database: GDP growth in the last 20 years for Italy, Germany, France, the US, and the Euro Area. As can be seen, Italy has almost always been the worst performer, the “maglia nera” (the black jersey given to the last-placed rider in cycling races).
My argument is simply that it is not a matter of left or right policies, or of left or right political figures. I am choosing a 20-year time span for a reason: in the last 20 years we had a “centre-left” administration for about 10 years, and a “centre-right” administration for about 10 years. Can you see the difference in the data? I can’t, since both in good and in bad times we have lost growth opportunities.
For full disclosure, I have voted for the “centre-left” coalition in the last 20 years. Frankly, it was a no-brainer. Anyway, the data show that centre-left and centre-right have no clue about how to restore growth in the country, or in a region, or in a city. And in the few cases in which they have a small clue, they have no courage (this last part is my opinion, not embedded in the data, so it might be wrong).
I just found the name of a sickness I have. I knew I was not the only one, but now I have confirmation, and I am relieved. Michael Crichton, in a great 2002 speech at the International Leadership Forum, called it “Gell-Mann Amnesia”. It was actually his own invention, and the name of a Nobel laureate only served to confer more authority on the malaise. To me it is therefore Crichton Amnesia.
So, I suffer from Crichton Amnesia. These are the symptoms, in four steps.
I read a newspaper article, or listen to some news on the radio or TV, on a topic I know in detail (for example, something related to physics, EE, research, academia).
I realize that the author/speaker has no understanding of what he/she is describing and of what really happened. He/she puts in no effort, and swaps cause and effect.
I decide that journalists are totally unreliable, and that one should never believe what they say.
A few seconds later, I forget my resolution, and I start reading or listening very carefully to the next piece, on a topic I have no direct knowledge of, such as the Middle East, the financial crisis, China, the prison system, and millions of other things. Indeed, it is amnesia.
I tried to cure myself in 2012 by following as little news as possible, and by going to the original sources when the news was really important and time allowed.
But my complete recovery is still far away, and the cure will take a lot of time and effort.
I have closely followed AlertMe since 2006, when it was founded in Cambridge, UK. I really liked their idea of cloud-based home security and automation with ZigBee sensors, before “cloud” was a term en vogue. It resonated with something we were (and still are) trying to do.
Then they switched their primary focus, inexplicably (at least to me), to smart energy, i.e. to ways of using the cloud-based sensor network to monitor and optimize energy use at home. I never understood the move. I made my calculations, considering a family living in an apartment and knowing our own family’s energy consumption: I could not see how one could invest 500-1,000 euros and several hours in the hope of reducing their energy bill by 50-100 euros per year. I thought the “home security” case was much more compelling. At the time, energy was a favored theme among VCs, and I assumed this was a way for them to complete their series B round of 15M GBP.
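The back-of-the-envelope calculation is straightforward: dividing the upfront cost by the yearly saving gives a simple, undiscounted payback period, using the cost and saving ranges quoted above:

```python
def payback_years(upfront_cost, annual_saving):
    """Simple payback period in years, ignoring discounting and maintenance."""
    return upfront_cost / annual_saving

# Best and worst cases from the figures above (euros):
best = payback_years(500, 100)    # 5 years
worst = payback_years(1000, 50)   # 20 years
```

Even the best case is a five-year payback, which is a hard sell for a household gadget.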
Now it seems to me they are refocusing on cloud-based smart homes, with a minor accent on security. The nice video above details their vision. As far as distribution is concerned, they have switched from direct website sales to partnering with British Gas and Lowe’s: much larger volumes, smaller margins.
I am really eager to see whether they will still be able to differentiate themselves from the pack of other home-automation companies (no longer the Apple of home automation).
Brilliant lecture on creativity by John Cleese of Monty Python fame (this is from 1991).
Even better, I also found the transcript. I know, John Cleese is as good as it gets, and the lecture is also incredibly funny, but it is too slow for me. I prefer to read. It was the same when I was a student at the university: I skipped lectures as soon as I found a good, dense textbook.
By association of ideas, a short gag from “The West Wing” comes to mind. I could not remember it exactly, so I just searched for the word “menu” in this site with the complete TWW scripts. Here it is: episode 11, series 7 (the video is below, at time 0:52):
The waitress walks up and hands them some menus.
Hi. Thank you.
Mmm-hm. Would you like to hear the specials?
Well, tonight we're featuring New Zealand lamb...
Is this from a list?
The specials, are they written down somewhere?
Um, yeah, they're right here.
Just give us that. We'll read. We're readers.
Whatever you want.
It's just easier that way, then you don't have to, you know, perform.
Web search is powerful and superfast. There is no excuse for not having enough information about the past, whether for fun or for serious business.